About This Document
- sl:arxiv_author : Alan Ramponi, Barbara Plank
- sl:arxiv_firstAuthor : Alan Ramponi
- sl:arxiv_num : 2006.00632
- sl:arxiv_published : 2020-05-31T22:34:14Z
- sl:arxiv_summary : Deep neural networks excel at learning from labeled data and achieve
state-of-the-art results on a wide array of Natural Language Processing tasks.
In contrast, learning from unlabeled data, especially under domain shift,
remains a challenge. Motivated by the latest advances, in this survey we review
neural unsupervised domain adaptation techniques which do not require labeled
target domain data. This is a more challenging, yet more widely applicable,
setup. We outline methods, from early traditional non-neural approaches to
pre-trained model transfer. We also revisit the notion of domain, and we
uncover a bias in the type of Natural Language Processing tasks which received
most attention. Lastly, we outline future directions, particularly the broader
need for out-of-distribution generalization of future NLP.@en
- sl:arxiv_title : Neural Unsupervised Domain Adaptation in NLP---A Survey@en
- sl:arxiv_updated : 2020-10-28T08:24:14Z
- sl:bookmarkOf : https://arxiv.org/abs/2006.00632
- sl:creationDate : 2022-03-30
- sl:creationTime : 2022-03-30T01:13:03Z