About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Ankur Bapna
- sl:arxiv_num : 2205.03983
- sl:arxiv_published : 2022-05-09T00:24:13Z
- sl:arxiv_summary : In this paper we share findings from our effort to build practical machine
translation (MT) systems capable of translating across over one thousand
languages. We describe results in three research domains: (i) Building clean,
web-mined datasets for 1500+ languages by leveraging semi-supervised
pre-training for language identification and developing data-driven filtering
techniques; (ii) Developing practical MT models for under-served languages by
leveraging massively multilingual models trained with supervised parallel data
for over 100 high-resource languages and monolingual datasets for an additional
1000+ languages; and (iii) Studying the limitations of evaluation metrics for
these languages and conducting qualitative analysis of the outputs from our MT
models, highlighting several frequent error modes of these types of models. We
hope that our work provides useful insights to practitioners working towards
building MT systems for currently understudied languages, and highlights
research directions that can complement the weaknesses of massively
multilingual models in data-sparse settings.@en
- sl:arxiv_title : Building Machine Translation Systems for the Next Thousand Languages@en
- sl:arxiv_updated : 2022-05-09T00:24:13Z
- sl:bookmarkOf : https://arxiv.org/abs/2205.03983
- sl:creationDate : 2022-05-10
- sl:creationTime : 2022-05-10T08:00:10Z