About This Document
- sl:arxiv_author : Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
- sl:arxiv_firstAuthor : Xipeng Qiu
- sl:arxiv_num : 2003.08271
- sl:arxiv_published : 2020-03-18T15:22:51Z
- sl:arxiv_summary : Recently, the emergence of pre-trained models (PTMs) has brought natural language processing (NLP) into a new era. In this survey, we provide a comprehensive review of PTMs for NLP. We first briefly introduce language representation learning and its research progress. Then we systematically categorize existing PTMs based on a taxonomy with four perspectives. Next, we describe how to adapt the knowledge of PTMs to downstream tasks. Finally, we outline some potential directions of PTMs for future research. This survey is intended to be a hands-on guide for understanding, using, and developing PTMs for various NLP tasks.
- sl:arxiv_title : Pre-trained Models for Natural Language Processing: A Survey
- sl:arxiv_updated : 2020-03-24T10:32:40Z
- sl:bookmarkOf : https://arxiv.org/abs/2003.08271
- sl:creationDate : 2020-03-19
- sl:creationTime : 2020-03-19T13:34:50Z