About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Bruno Taillé
- sl:arxiv_num : 2001.08053
- sl:arxiv_published : 2020-01-22T15:15:34Z
- sl:arxiv_summary : Contextualized embeddings use unsupervised language model pretraining to
compute word representations depending on their context. This is intuitively
useful for generalization, especially in Named-Entity Recognition where it is
crucial to detect mentions never seen during training. However, standard
English benchmarks overestimate the importance of lexical over contextual
features because of an unrealistic lexical overlap between train and test
mentions. In this paper, we perform an empirical analysis of the generalization
capabilities of state-of-the-art contextualized embeddings by separating
mentions by novelty and with out-of-domain evaluation. We show that they are
particularly beneficial for unseen mentions detection, especially
out-of-domain. For models trained on CoNLL03, language model contextualization
leads to a +1.2% maximal relative micro-F1 score increase in-domain against
+13% out-of-domain on the WNUT dataset.@en
- sl:arxiv_title : Contextualized Embeddings in Named-Entity Recognition: An Empirical Study on Generalization@en
- sl:arxiv_updated : 2020-01-22T15:15:34Z
- sl:bookmarkOf : https://arxiv.org/abs/2001.08053
- sl:creationDate : 2020-10-01
- sl:creationTime : 2020-10-01T11:43:28Z