About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Niklas Muennighoff
- sl:arxiv_num : 2210.07316
- sl:arxiv_published : 2022-10-13T19:42:08Z
- sl:arxiv_summary : Text embeddings are commonly evaluated on a small set of datasets from a
single task not covering their possible applications to other tasks. It is
unclear whether state-of-the-art embeddings on semantic textual similarity
(STS) can be equally well applied to other tasks like clustering or reranking.
This makes progress in the field difficult to track, as various models are
constantly being proposed without proper evaluation. To solve this problem, we
introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding
tasks covering a total of 56 datasets and 112 languages. Through the
benchmarking of 33 models on MTEB, we establish the most comprehensive
benchmark of text embeddings to date. We find that no particular text embedding
method dominates across all tasks. This suggests that the field has yet to
converge on a universal text embedding method and scale it up sufficiently to
provide state-of-the-art results on all embedding tasks. MTEB comes with
open-source code and a public leaderboard at
https://huggingface.co/spaces/mteb/leaderboard.
- sl:arxiv_title : MTEB: Massive Text Embedding Benchmark
- sl:arxiv_updated : 2022-10-13T19:42:08Z
- sl:bookmarkOf : https://arxiv.org/abs/2210.07316
- sl:creationDate : 2022-10-17
- sl:creationTime : 2022-10-17T17:13:34Z