About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Ikuya Yamada
- sl:arxiv_num : 2010.01057
- sl:arxiv_published : 2020-10-02T15:38:03Z
- sl:arxiv_summary : Entity representations are useful in natural language tasks involving
entities. In this paper, we propose new pretrained contextualized
representations of words and entities based on the bidirectional transformer.
The proposed model treats words and entities in a given text as independent
tokens, and outputs contextualized representations of them. Our model is
trained using a new pretraining task based on the masked language model of
BERT. The task involves predicting randomly masked words and entities in a
large entity-annotated corpus retrieved from Wikipedia. We also propose an
entity-aware self-attention mechanism that is an extension of the
self-attention mechanism of the transformer, and considers the types of tokens
(words or entities) when computing attention scores. The proposed model
achieves impressive empirical performance on a wide range of entity-related
tasks. In particular, it obtains state-of-the-art results on five well-known
datasets: Open Entity (entity typing), TACRED (relation classification),
CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering),
and SQuAD 1.1 (extractive question answering). Our source code and pretrained
representations are available at https://github.com/studio-ousia/luke.@en
- sl:arxiv_title : LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention@en
- sl:arxiv_updated : 2020-10-02T15:38:03Z
- sl:bookmarkOf : https://arxiv.org/abs/2010.01057
- sl:creationDate : 2020-11-26
- sl:creationTime : 2020-11-26T16:21:30Z
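
The entity-aware self-attention described in the abstract computes attention scores that depend on whether the query and key tokens are words or entities. Below is a minimal sketch of that idea in Python, assuming a single attention head with four type-dependent query projections and shared key/value projections; the class and variable names (`EntityAwareSelfAttention`, `q`, `k`, `v`) are illustrative and are not LUKE's actual implementation or API.

```python
# Sketch of entity-aware self-attention: the query projection used for the
# score between token i and token j depends on the (word/entity) types of both
# tokens. Hypothetical names; not the LUKE codebase.
import numpy as np


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


class EntityAwareSelfAttention:
    """One attention head with type-dependent query projections (sketch)."""

    def __init__(self, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        d = hidden_size
        # One query matrix per (query type, key type) pairing; keys and
        # values are shared across token types in this sketch.
        self.q = {
            ("word", "word"): rng.normal(size=(d, d)) / np.sqrt(d),
            ("word", "entity"): rng.normal(size=(d, d)) / np.sqrt(d),
            ("entity", "word"): rng.normal(size=(d, d)) / np.sqrt(d),
            ("entity", "entity"): rng.normal(size=(d, d)) / np.sqrt(d),
        }
        self.k = rng.normal(size=(d, d)) / np.sqrt(d)
        self.v = rng.normal(size=(d, d)) / np.sqrt(d)
        self.d = d

    def __call__(self, hidden, token_types):
        # hidden: (seq_len, hidden_size); token_types: "word" or "entity" per token.
        keys = hidden @ self.k
        values = hidden @ self.v
        scores = np.empty((len(token_types), len(token_types)))
        for i, t_query in enumerate(token_types):
            for j, t_key in enumerate(token_types):
                query = hidden[i] @ self.q[(t_query, t_key)]
                scores[i, j] = query @ keys[j] / np.sqrt(self.d)
        return softmax(scores, axis=-1) @ values


# Example: a sequence of three word tokens followed by one entity token.
attn = EntityAwareSelfAttention(hidden_size=8)
states = np.random.default_rng(1).normal(size=(4, 8))
out = attn(states, ["word", "word", "word", "entity"])
print(out.shape)  # (4, 8)
```

The sketch only illustrates the type-conditioned scoring; multi-head attention, learned parameters, and the masked word/entity pretraining objective from the abstract are omitted.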