About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Alexis Conneau
- sl:arxiv_num : 1911.02116
- sl:arxiv_published : 2019-11-05T22:42:00Z
- sl:arxiv_summary : This paper shows that pretraining multilingual language models at scale leads
to significant performance gains for a wide range of cross-lingual transfer
tasks. We train a Transformer-based masked language model on one hundred
languages, using more than two terabytes of filtered CommonCrawl data. Our
model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a
variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI,
+13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs
particularly well on low-resource languages, improving 15.7% in XNLI accuracy
for Swahili and 11.4% for Urdu over previous XLM models. We also present a
detailed empirical analysis of the key factors that are required to achieve
these gains, including the trade-offs between (1) positive transfer and
capacity dilution and (2) the performance of high and low resource languages at
scale. Finally, we show, for the first time, the possibility of multilingual
modeling without sacrificing per-language performance; XLM-R is very
competitive with strong monolingual models on the GLUE and XNLI benchmarks. We
will make our code, data and models publicly available.@en
- sl:arxiv_title : Unsupervised Cross-lingual Representation Learning at Scale@en
- sl:arxiv_updated : 2020-04-08T01:02:17Z
- sl:bookmarkOf : https://aclanthology.org/2020.acl-main.747.pdf
- sl:bookmarkOf : https://arxiv.org/abs/1911.02116
- sl:creationDate : 2021-07-29
- sl:creationTime : 2021-07-29T00:16:13Z
- sl:relatedDoc : http://www.semanlink.net/doc/2021/07/cc_100_monolingual_datasets_fr
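The abstract above describes XLM-R, a Transformer masked language model pretrained on CommonCrawl data covering one hundred languages. As a minimal sketch of how such a model can be queried for masked-token prediction, the snippet below uses the Hugging Face Transformers fill-mask pipeline; the checkpoint name "xlm-roberta-base" and the pipeline-based usage are assumptions not stated on this page, which only notes that code, data and models will be released.

```python
# Minimal sketch: masked-language-model inference with XLM-R.
# Assumption: the publicly released "xlm-roberta-base" checkpoint on the
# Hugging Face Hub (not taken from this page, which predates a named release).
from transformers import pipeline

# The fill-mask pipeline loads the XLM-R tokenizer and masked-LM head.
fill_mask = pipeline("fill-mask", model="xlm-roberta-base")

# XLM-R is pretrained on 100 languages, so one model serves many inputs;
# its mask token is "<mask>".
print(fill_mask("Bonjour, je suis un modèle <mask>."))  # French
print(fill_mask("Habari ya <mask>."))                   # Swahili
```

Each call returns the highest-probability fillers for the `<mask>` position, illustrating the single shared multilingual vocabulary and encoder discussed in the abstract.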