About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Michiel de Jong
- sl:arxiv_num : 2110.06176
- sl:arxiv_published : 2021-10-12T17:19:05Z
- sl:arxiv_summary : Natural language understanding tasks such as open-domain question answering often require retrieving and assimilating factual information from multiple sources. We propose to address this problem by integrating a semi-parametric representation of a large text corpus into a Transformer model as a source of factual knowledge. Specifically, our method represents knowledge with 'mention memory', a table of dense vector representations of every entity mention in a corpus. The proposed model, TOME, is a Transformer that accesses the information through internal memory layers in which each entity mention in the input passage attends to the mention memory. This approach enables synthesis of and reasoning over many disparate sources of information within a single Transformer model. In experiments using a memory of 150 million Wikipedia mentions, TOME achieves strong performance on several open-domain knowledge-intensive tasks, including the claim verification benchmarks HoVer and FEVER and several entity-based QA benchmarks. We also show that the model learns to attend to informative mentions without any direct supervision. Finally, we demonstrate that the model can generalize to new, unseen entities by updating the memory without retraining.@en
- sl:arxiv_title : Mention Memory: incorporating textual knowledge into Transformers through entity mention attention@en
- sl:arxiv_updated : 2021-10-12T17:19:05Z
- sl:bookmarkOf : https://arxiv.org/abs/2110.06176
- sl:creationDate : 2021-10-13
- sl:creationTime : 2021-10-13T15:55:04Z
- sl:relatedDoc :
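A minimal sketch of the mention-memory attention described in the abstract above: each entity mention in the input attends over a table of dense mention encodings. This is not the authors' code; the function name, the `top_k` retrieval step, and the toy dimensions are illustrative assumptions (the real model retrieves from a memory of ~150 million Wikipedia mentions, which requires approximate nearest-neighbor search rather than a full dot-product scan).

```python
import torch
import torch.nn.functional as F

def mention_memory_attention(queries: torch.Tensor,
                             memory_keys: torch.Tensor,
                             memory_values: torch.Tensor,
                             top_k: int = 4) -> torch.Tensor:
    """queries: (num_mentions, d); memory_keys/values: (memory_size, d).

    Returns one retrieved-knowledge vector per input mention.
    """
    # Coarse retrieval: score every memory entry per query by dot product
    # and keep the top-k. (At 150M entries this scan would be replaced by
    # an approximate nearest-neighbor index.)
    scores = queries @ memory_keys.T                 # (num_mentions, memory_size)
    top_scores, top_idx = scores.topk(top_k, dim=-1) # (num_mentions, top_k)

    # Fine-grained attention over only the retrieved entries, so each
    # mention aggregates information from several disparate sources.
    weights = F.softmax(top_scores, dim=-1)          # (num_mentions, top_k)
    retrieved = memory_values[top_idx]               # (num_mentions, top_k, d)
    return (weights.unsqueeze(-1) * retrieved).sum(dim=1)  # (num_mentions, d)

# Toy usage: 3 input mentions attending over a 10-entry memory of 8-dim vectors.
mem_k, mem_v = torch.randn(10, 8), torch.randn(10, 8)
q = torch.randn(3, 8)
print(mention_memory_attention(q, mem_k, mem_v).shape)  # torch.Size([3, 8])
```

Because the memory table is consulted at inference time rather than baked into the weights, new entities can in principle be supported by appending rows to `memory_keys`/`memory_values`, matching the abstract's claim of generalizing to unseen entities without retraining.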