About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Thibault Formal
- sl:arxiv_num : 2109.10086
- sl:arxiv_published : 2021-09-21T10:43:42Z
- sl:arxiv_summary : In neural Information Retrieval (IR), ongoing research is directed towards
improving the first retriever in ranking pipelines. Learning dense embeddings
to conduct retrieval using efficient approximate nearest neighbors methods has
proven to work well. Meanwhile, there has been a growing interest in learning
*sparse* representations for documents and queries, which could inherit the
desirable properties of bag-of-words models, such as the exact matching
of terms and the efficiency of inverted indexes. Introduced recently, the
SPLADE model provides highly sparse representations and competitive results
with respect to state-of-the-art dense and sparse approaches. In this paper, we
build on SPLADE and propose several significant improvements in terms of
effectiveness and/or efficiency. More specifically, we modify the pooling
mechanism, benchmark a model solely based on document expansion, and introduce
models trained with distillation. We also report results on the BEIR benchmark.
Overall, SPLADE is considerably improved, with more than 9% gains on NDCG@10
on TREC DL 2019, leading to state-of-the-art results on the BEIR benchmark.
- sl:arxiv_title : SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval
- sl:arxiv_updated : 2021-09-21T10:43:42Z
- sl:bookmarkOf : https://arxiv.org/abs/2109.10086
- sl:creationDate : 2023-07-26
- sl:creationTime : 2023-07-26T23:28:40Z
- sl:relatedDoc : http://www.semanlink.net/doc/2023/05/2107_05720_splade_sparse_lex
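The abstract mentions that SPLADE v2 modifies the pooling mechanism: term weights are obtained by max pooling a log-saturated ReLU over the per-token vocabulary logits, rather than summing. A minimal sketch of that pooling step, with made-up toy logits (the function name and inputs are illustrative, not from the paper's code):

```python
import math

def splade_max_pool(token_logits):
    """SPLADE v2-style pooling sketch: for each vocabulary term j, take
    w_j = max over input tokens i of log(1 + ReLU(logit_ij)).
    `token_logits` is a list of rows, one per input token, each holding
    one logit per vocabulary term (toy values here, not real MLM outputs)."""
    vocab_size = len(token_logits[0])
    weights = []
    for j in range(vocab_size):
        # ReLU zeroes negative logits; log(1 + x) saturates large ones.
        w_j = max(math.log(1.0 + max(0.0, row[j])) for row in token_logits)
        weights.append(w_j)
    return weights

# Two input tokens scored against a 4-term vocabulary (hypothetical numbers).
logits = [
    [1.2, -0.5, 0.0, 3.0],
    [0.1,  2.0, -1.0, 0.5],
]
print(splade_max_pool(logits))
```

Terms whose logits are never positive get weight exactly 0 (term 2 above), which is what makes the resulting representation sparse enough to store in an inverted index.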