About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Shijie Wu
- sl:arxiv_num : 1911.01464
- sl:arxiv_published : 2019-11-04T19:41:13Z
- sl:arxiv_summary : We study the problem of multilingual masked language modeling, i.e. the
training of a single model on concatenated text from multiple languages, and
present a detailed study of several factors that influence why these models are
so effective for cross-lingual transfer. We show, contrary to what was
previously hypothesized, that transfer is possible even when there is no shared
vocabulary across the monolingual corpora and also when the text comes from
very different domains. The only requirement is that there are some shared
parameters in the top layers of the multilingual encoder. To better understand
this result, we also show that representations from independently trained
models in different languages can be aligned post-hoc quite effectively,
strongly suggesting that, much like for non-contextual word embeddings, there
are universal latent symmetries in the learned embedding spaces. For
multilingual masked language modeling, these symmetries seem to be
automatically discovered and aligned during the joint training process.@en
- sl:arxiv_title : Emerging Cross-lingual Structure in Pretrained Language Models@en
- sl:arxiv_updated : 2019-11-10T06:55:02Z
- sl:bookmarkOf : https://arxiv.org/abs/1911.01464
- sl:creationDate : 2019-11-06
- sl:creationTime : 2019-11-06T13:09:03Z
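
The abstract's claim that representations from independently trained monolingual models "can be aligned post-hoc quite effectively" is commonly operationalized as fitting an orthogonal linear map (orthogonal Procrustes) on paired representations. The sketch below illustrates that general technique, not the paper's exact procedure; the matrices `X` and `Y` and the rotation-based toy data are illustrative assumptions.

```python
# Minimal sketch of post-hoc linear alignment via orthogonal Procrustes:
# given paired representations X, Y (one row per translation pair), find
# the orthogonal W minimizing ||X W - Y||_F. Names and data are hypothetical.
import numpy as np

def procrustes_align(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Return the orthogonal (dim, dim) matrix W best mapping rows of X onto Y.

    X, Y: (n_pairs, dim) arrays of representations for paired items.
    """
    # Closed-form solution: W = U V^T, where X^T Y = U S V^T (SVD).
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy usage: two "embedding spaces" related by a hidden rotation Q.
rng = np.random.default_rng(0)
dim, n = 8, 100
X = rng.normal(size=(n, dim))
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))  # ground-truth rotation
Y = X @ Q
W = procrustes_align(X, Y)
print(np.allclose(X @ W, Y, atol=1e-8))  # True: alignment recovered
```

The closed form follows from maximizing trace(W^T X^T Y) over orthogonal W; with real, jointly trained multilingual encoders the fit is only approximate, which is the sense in which the paper says the symmetries are "automatically discovered and aligned" during joint training.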