About This Document
- sl:arxiv_author : Jimmy Lin, Rodrigo Nogueira, Andrew Yates
- sl:arxiv_firstAuthor : Jimmy Lin
- sl:arxiv_num : 2010.06467
- sl:arxiv_published : 2020-10-13T15:20:32Z
- sl:arxiv_summary : The goal of text ranking is to generate an ordered list of texts retrieved
from a corpus in response to a query. Although the most common formulation of
text ranking is search, instances of the task can also be found in many natural
language processing applications. This survey provides an overview of text
ranking with neural network architectures known as transformers, of which BERT
is the best-known example. The combination of transformers and self-supervised
pretraining has, without exaggeration, revolutionized the fields of natural
language processing (NLP), information retrieval (IR), and beyond. In this
survey, we provide a synthesis of existing work as a single point of entry for
practitioners who wish to gain a better understanding of how to apply
transformers to text ranking problems and researchers who wish to pursue work
in this area. We cover a wide range of modern techniques, grouped into two
high-level categories: transformer models that perform reranking in multi-stage
ranking architectures and learned dense representations that attempt to perform
ranking directly. There are two themes that pervade our survey: techniques for
handling long documents, beyond the typical sentence-by-sentence processing
approaches used in NLP, and techniques for addressing the tradeoff between
effectiveness (result quality) and efficiency (query latency). Although
transformer architectures and pretraining techniques are recent innovations,
many aspects of how they are applied to text ranking are relatively well
understood and represent mature techniques. However, there remain many open
research questions, and thus in addition to laying out the foundations of
pretrained transformers for text ranking, this survey also attempts to
prognosticate where the field is heading.@en
- sl:arxiv_title : Pretrained Transformers for Text Ranking: BERT and Beyond@en
- sl:arxiv_updated : 2020-10-13T15:20:32Z
- sl:bookmarkOf : https://arxiv.org/abs/2010.06467
- sl:creationDate : 2021-07-09
- sl:creationTime : 2021-07-09T14:50:44Z
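
The abstract's first category, transformer models that rerank candidates in a multi-stage pipeline, can be illustrated with a minimal sketch. This is not code from the survey: it assumes the Hugging Face `transformers` library and an example MS MARCO cross-encoder checkpoint (`cross-encoder/ms-marco-MiniLM-L-6-v2`), and simply scores each (query, passage) pair jointly with the model and sorts candidates by score, the monoBERT-style reranking pattern the abstract refers to.

```python
# Hedged sketch: cross-encoder reranking with a pretrained transformer.
# Model name, query, and passages are illustrative placeholders only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

query = "what is text ranking"
candidates = [
    "Text ranking generates an ordered list of texts retrieved from a corpus.",
    "Transformers are a neural network architecture based on self-attention.",
    "The capital of France is Paris.",
]

# Encode each (query, passage) pair together; the model emits one relevance
# score per pair, so higher logits mean more relevant passages.
inputs = tokenizer(
    [query] * len(candidates), candidates,
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

# Rerank: sort candidates by descending relevance score.
for score, passage in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:.3f}\t{passage}")
```

In an end-to-end system this reranker would sit behind a cheaper first-stage retriever (e.g. BM25 or a learned dense index), which reflects the effectiveness/efficiency tradeoff the abstract highlights: the cross-encoder is accurate but too slow to score an entire corpus per query.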