About This Document
- sl:arxiv_author : Jeffrey Ling
- sl:arxiv_firstAuthor : Jeffrey Ling
- sl:arxiv_num : 2001.03765
- sl:arxiv_published : 2020-01-11T15:30:56Z
- sl:arxiv_summary : Language modeling tasks, in which words or word-pieces are
predicted on the basis of a local context, have been very effective for learning
word embeddings and context-dependent representations of phrases. Motivated by
the observation that efforts to code world knowledge into machine-readable
knowledge bases or human-readable encyclopedias tend to be entity-centric, we
investigate the use of a fill-in-the-blank task to learn context-independent
representations of entities from the text contexts in which those entities were
mentioned. We show that large-scale training of neural models allows us to learn
high-quality entity representations, and we demonstrate successful results on
four domains: (1) existing entity-level typing benchmarks, including a 64% error
reduction over previous work on TypeNet (Murty et al., 2018); (2) a novel
few-shot category reconstruction task; (3) existing entity linking benchmarks,
where we match the state of the art on CoNLL-Aida without linking-specific
features and obtain a score of 89.8% on TAC-KBP 2010 without using any alias
table, external knowledge base, or in-domain training data; and (4) answering
trivia questions that uniquely identify entities. Our global entity
representations encode fine-grained type categories, such as Scottish
footballers, and can answer trivia questions such as: Who was the last inmate of
Spandau jail in Berlin?
- sl:arxiv_title : Learning Cross-Context Entity Representations from Text
- sl:arxiv_updated : 2020-01-11T15:30:56Z
- sl:bookmarkOf : https://arxiv.org/abs/2001.03765
- sl:creationDate : 2021-06-22
- sl:creationTime : 2021-06-22T13:42:19Z
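As a rough illustration of the fill-in-the-blank objective described in the summary above, here is a minimal PyTorch sketch: a context encoder maps a mention context (with the entity blanked out) to a vector, which is scored against a global table of context-independent entity embeddings and trained with cross-entropy against the identity of the blanked entity. The encoder architecture, dimensions, and all names below are assumptions for illustration, not the paper's actual model.

```python
import torch
import torch.nn as nn

class EntityFillInTheBlank(nn.Module):
    """Sketch of a fill-in-the-blank entity objective (hypothetical, not the
    paper's architecture): encode the blanked context, then score it against
    one trainable vector per entity. Those per-entity vectors are the
    context-independent representations being learned."""

    def __init__(self, vocab_size: int, num_entities: int, dim: int = 256):
        super().__init__()
        # Stand-in context encoder: mean of token embeddings plus a projection.
        self.context_encoder = nn.Sequential(
            nn.EmbeddingBag(vocab_size, dim, mode="mean"),
            nn.Linear(dim, dim),
        )
        # Global entity embedding table: one vector per entity.
        self.entity_embeddings = nn.Embedding(num_entities, dim)

    def forward(self, context_token_ids: torch.Tensor) -> torch.Tensor:
        h = self.context_encoder(context_token_ids)       # (batch, dim)
        # Dot-product score of the context against every entity vector.
        return h @ self.entity_embeddings.weight.T        # (batch, num_entities)

# One training step with toy data: predict which entity was blanked out.
model = EntityFillInTheBlank(vocab_size=30_000, num_entities=100_000)
contexts = torch.randint(0, 30_000, (4, 32))   # 4 contexts, 32 tokens each
gold_entities = torch.randint(0, 100_000, (4,))
loss = nn.functional.cross_entropy(model(contexts), gold_entities)
loss.backward()
```

After training at scale, the rows of the entity embedding table can be used directly for the downstream tasks the summary lists, e.g. nearest-neighbor retrieval for entity linking or probing for fine-grained type categories.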