About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Gautier Izacard
- sl:arxiv_num : 2012.04584
- sl:arxiv_published : 2020-12-08T17:36:34Z
- sl:arxiv_summary : The task of information retrieval is an important component of many natural language processing systems, such as open domain question answering. While traditional methods were based on hand-crafted features, continuous representations based on neural networks recently obtained competitive results. A challenge of using such methods is to obtain supervised data to train the retriever model, corresponding to pairs of query and support documents. In this paper, we propose a technique to learn retriever models for downstream tasks, inspired by knowledge distillation, and which does not require annotated pairs of query and documents. Our approach leverages attention scores of a reader model, used to solve the task based on retrieved documents, to obtain synthetic labels for the retriever. We evaluate our method on question answering, obtaining state-of-the-art results.
- sl:arxiv_title : Distilling Knowledge from Reader to Retriever for Question Answering
- sl:arxiv_updated : 2020-12-08T17:36:34Z
- sl:bookmarkOf : https://arxiv.org/abs/2012.04584
- sl:creationDate : 2020-12-11
- sl:creationTime : 2020-12-11T16:48:13Z
- sl:relatedDoc :
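The summary above describes the paper's core idea: aggregate the reader model's cross-attention over each retrieved passage and use those scores as synthetic relevance labels for training the retriever. The snippet below is a minimal, hypothetical sketch of that idea, not the authors' implementation; the function name, tensor shapes, aggregation assumption (attention mass already summed per passage), and the KL-divergence objective with a temperature are illustrative choices.

```python
# Minimal sketch (assumptions, not the paper's code) of distilling reader
# cross-attention scores into soft labels for a dual-encoder retriever.

import torch
import torch.nn.functional as F


def distillation_loss(query_emb, passage_embs, reader_attention, temperature=1.0):
    """KL divergence between the reader's and the retriever's passage distributions.

    query_emb:        (d,)    query embedding from the retriever encoder
    passage_embs:     (k, d)  embeddings of the k retrieved passages
    reader_attention: (k,)    reader cross-attention mass aggregated per passage
                              (assumed summed over layers, heads, and tokens)
    """
    # Retriever relevance scores: dot product between query and passage embeddings.
    retriever_scores = passage_embs @ query_emb                        # (k,)

    # Turn both score vectors into distributions over the k retrieved passages.
    target = F.softmax(reader_attention / temperature, dim=0)          # soft labels
    log_pred = F.log_softmax(retriever_scores / temperature, dim=0)

    # Minimizing KL pushes the retriever's distribution toward the reader's.
    return F.kl_div(log_pred, target, reduction="sum")


if __name__ == "__main__":
    d, k = 768, 4                                    # embedding size, passages per query
    query = torch.randn(d, requires_grad=True)
    passages = torch.randn(k, d, requires_grad=True)
    attn = torch.rand(k)                             # stand-in for aggregated attention
    loss = distillation_loss(query, passages, attn)
    loss.backward()                                  # gradients flow into the retriever embeddings
    print(float(loss))
```

In this sketch only the retriever receives gradients; the reader's attention scores act as fixed targets for each training round, matching the distillation framing in the summary.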