About This Document
- sl:arxiv_author : Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le
- sl:arxiv_firstAuthor : Zhilin Yang
- sl:arxiv_num : 1906.08237
- sl:arxiv_published : 2019-06-19T17:35:48Z
- sl:arxiv_summary : With the capability of modeling bidirectional contexts, denoising
autoencoding based pretraining like BERT achieves better performance than
pretraining approaches based on autoregressive language modeling. However,
relying on corrupting the input with masks, BERT neglects dependency between
the masked positions and suffers from a pretrain-finetune discrepancy. In light
of these pros and cons, we propose XLNet, a generalized autoregressive
pretraining method that (1) enables learning bidirectional contexts by
maximizing the expected likelihood over all permutations of the factorization
order and (2) overcomes the limitations of BERT thanks to its autoregressive
formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the
state-of-the-art autoregressive model, into pretraining. Empirically, under
comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a
large margin, including question answering, natural language inference,
sentiment analysis, and document ranking.@en
- sl:arxiv_title : XLNet: Generalized Autoregressive Pretraining for Language Understanding@en
- sl:arxiv_updated : 2020-01-02T12:48:08Z
- sl:bookmarkOf : https://arxiv.org/abs/1906.08237
- sl:creationDate : 2019-06-21
- sl:creationTime : 2019-06-21T16:29:51Z
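
The abstract above mentions "maximizing the expected likelihood over all permutations of the factorization order". A minimal sketch of that permutation language modeling objective is given below; the notation (\mathcal{Z}_T for the set of all permutations of the index sequence (1, ..., T), z_t and \mathbf{z}_{<t} for the t-th element and the first t-1 elements of a sampled order \mathbf{z}) is reproduced from memory of the paper's presentation, so treat it as a sketch rather than a verbatim excerpt.

```latex
% Sketch of XLNet's permutation language modeling objective:
% maximize, over model parameters \theta, the expected autoregressive
% log-likelihood of the sequence x = (x_1, ..., x_T) when the
% factorization order z is drawn from the set \mathcal{Z}_T of all
% permutations of (1, ..., T).
\max_{\theta} \;
  \mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T}
  \left[ \sum_{t=1}^{T}
    \log p_{\theta}\!\left( x_{z_t} \mid \mathbf{x}_{\mathbf{z}_{<t}} \right)
  \right]
```

Because the expectation ranges over all factorization orders, each token is conditioned on every other token in expectation, which is how the autoregressive formulation captures bidirectional context without corrupting the input with masks.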