Ranking Measures and Loss Functions in Learning to Rank (2009)(About) > While most learning-to-rank methods learn the ranking function by minimizing the loss functions, it is the ranking measures (such as NDCG and MAP) that are used to evaluate the performance of the learned ranking function. In this work, we reveal the relationship between ranking measures and loss functions in learning-to-rank methods, such as Ranking SVM, RankBoost, RankNet, and ListMLE.
> we have proved that many pairwise/listwise losses in learning to rank are actually upper bounds of measure-based ranking errors. As a result, the minimization of these loss functions will lead to the maximization of the ranking measures. The key to obtaining this result is to model ranking as a sequence of classification tasks, and define a so-called essential loss as the weighted sum of the classification errors of individual tasks in the sequence.
> We have also shown a way to improve existing methods by introducing appropriate weights to their loss functions.
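The essential-loss construction can be sketched in a few lines. This is my reading of the paper, not its code: ranking n documents is decomposed into a sequence of classification tasks, where task t checks whether the document placed at position t has the highest relevance label among the not-yet-ranked documents; the essential loss is a weighted sum of those 0/1 errors (uniform weights below are an assumption), and the 0/1 pairwise error plays the role of the pairwise surrogate losses.

```python
def essential_loss(predicted_order, labels, weights=None):
    """predicted_order: doc indices in ranked order; labels: relevance per doc.

    Weighted sum of per-position classification errors (uniform weights
    here as a simplifying assumption).
    """
    n = len(predicted_order)
    if weights is None:
        weights = [1.0] * (n - 1)
    remaining = list(predicted_order)
    loss = 0.0
    for t in range(n - 1):
        best = max(remaining, key=lambda d: labels[d])
        if labels[remaining[0]] < labels[best]:  # classification error at step t
            loss += weights[t]
        remaining.pop(0)
    return loss

def pairwise_error(predicted_order, labels):
    """Number of misordered pairs -- the 0/1 analogue of pairwise losses."""
    errs = 0
    for i in range(len(predicted_order)):
        for j in range(i + 1, len(predicted_order)):
            if labels[predicted_order[i]] < labels[predicted_order[j]]:
                errs += 1
    return errs
```

For example, with `labels = [3, 1, 2, 0]` and predicted order `[1, 0, 2, 3]`, the essential loss is 1 while the pairwise error is 2, consistent with the paper's result that pairwise losses upper-bound the essential loss.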
[1704.08803] Neural Ranking Models with Weak Supervision (2017)(About) Main Idea: To leverage large amounts of unsupervised data to infer “weak” labels and use that signal for learning supervised models as if we had the ground truth labels. See [blog post](/doc/?uri=http%3A%2F%2Fmostafadehghani.com%2F2017%2F04%2F23%2Fbeating-the-teacher-neural-ranking-models-with-weak-supervision%2F):
> This is truly awesome since we have only used BM25 as the supervisor to train a model which performs better than BM25 itself!
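The weak-supervision loop can be sketched as follows (all names here are illustrative, not the paper's code, and the scorer is a toy stand-in for BM25): score query-document pairs with an unsupervised ranker, treat those scores as pseudo-labels, and fit a learned model to them as if they were ground truth.

```python
def bm25_like_score(features):
    # Toy stand-in for a real BM25 scorer over (tf, idf)-style features;
    # the saturation term tf / (tf + k1) mimics BM25's term-frequency damping.
    tf, idf = features
    return idf * tf / (tf + 1.5)

def train_on_weak_labels(train_features, lr=0.02, epochs=2000):
    """Fit a linear model w.x + b to the unsupervised scores by gradient descent.

    The key point is that no human-annotated labels appear anywhere:
    the targets come entirely from the unsupervised ranker.
    """
    weak_labels = [bm25_like_score(x) for x in train_features]
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(train_features, weak_labels):
            pred = w[0] * x[0] + w[1] * x[1] + b
            g = pred - y  # squared-error gradient
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b
```

The paper's point is that the student can generalize beyond its weak teacher; this toy version only shows the mechanics of training on pseudo-labels.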
A Ranking Approach to Keyphrase Extraction - Microsoft Research (2009)(About) Previously, automatic keyphrase extraction was formalized as a classification problem and solved with classification methods. This paper points out that it is more natural to cast keyphrase extraction as ranking and to employ a learning-to-rank method for the task. As an example, it employs Ranking SVM, a state-of-the-art learning-to-rank method, for keyphrase extraction.
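The pairwise idea behind Ranking SVM can be sketched like this (an illustrative minimal version, not the paper's implementation): candidate phrases are represented as feature vectors, and a linear scorer w is trained with a hinge loss on pairwise differences so that true keyphrases score above non-keyphrases.

```python
def train_ranking_svm(pairs, dim, lr=0.1, reg=0.01, epochs=200):
    """pairs: list of (x_pos, x_neg) where x_pos should outrank x_neg.

    Subgradient descent on the hinge loss max(0, 1 - w.(x_pos - x_neg))
    plus an L2 regularizer -- the pairwise formulation at the core of
    Ranking SVM.
    """
    w = [0.0] * dim
    for _ in range(epochs):
        for x_pos, x_neg in pairs:
            diff = [p - n for p, n in zip(x_pos, x_neg)]
            margin = sum(wi * di for wi, di in zip(w, diff))
            for i in range(dim):
                g = reg * w[i] - (diff[i] if margin < 1 else 0.0)
                w[i] -= lr * g
    return w

def score(w, x):
    """Rank candidate phrases by this score at extraction time."""
    return sum(wi * xi for wi, xi in zip(w, x))
```

At extraction time, candidates are sorted by `score` and the top-k are returned as keyphrases; the features themselves (tf-idf, position, phrase length, etc.) are left out of this sketch.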