About This Document
- sl:arxiv_author : Hongliang Dai, Yangqiu Song, Haixun Wang
- sl:arxiv_firstAuthor : Hongliang Dai
- sl:arxiv_num : 2106.04098
- sl:arxiv_published : 2021-06-08T04:43:28Z
- sl:arxiv_summary : Recently, there has been an effort to extend fine-grained entity typing by
using a richer, ultra-fine set of types and by labeling noun phrases, including
pronouns and nominal mentions, instead of just named entity mentions. A key
challenge for this ultra-fine entity typing task is that human-annotated data
are extremely scarce, and the annotation ability of existing distant or weak
supervision approaches is very limited. To remedy this problem, in this paper
we propose to obtain training data for ultra-fine entity typing by using a BERT
Masked Language Model (MLM). Given a mention in a sentence, our approach
constructs an input for the BERT MLM so that it predicts context-dependent
hypernyms of the mention, which can be used as type labels. Experimental
results demonstrate that, with the help of these automatically generated
labels, the performance of an ultra-fine entity typing model can be improved
substantially. We also show that our approach can be applied to improve
traditional fine-grained entity typing after performing a simple type mapping.@en
- sl:arxiv_title : Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model@en
- sl:arxiv_updated : 2021-06-08T04:43:28Z
- sl:bookmarkOf : https://arxiv.org/abs/2106.04098
- sl:creationDate : 2021-06-16
- sl:creationTime : 2021-06-16T11:26:44Z
- sl:relatedDoc : http://www.semanlink.net/doc/2021/06/1807_04905_ultra_fine_entity_
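The summary above describes the paper's core technique: prompt a BERT masked language model so that its [MASK] predictions act as context-dependent hypernyms of a mention, which then serve as weak type labels. The sketch below illustrates this idea with the Hugging Face transformers library. The specific prompt ("[MASK] such as <mention>"), the helper name hypernym_candidates, and the example sentence are illustrative assumptions; the paper combines several Hearst-style patterns and additional label-selection steps that are not reproduced here.

```python
# Minimal sketch (not the paper's exact implementation): query a BERT MLM
# for context-dependent hypernyms of a mention by inserting a Hearst-style
# "[MASK] such as <mention>" phrase into the sentence.
# Requires: pip install torch transformers
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def hypernym_candidates(sentence: str, mention: str, k: int = 10) -> list[str]:
    """Return the top-k single-token hypernym guesses for `mention` in context."""
    # Insert "[MASK] such as" right before the first occurrence of the mention,
    # so that BERT fills the mask with plausible hypernyms of the mention.
    prompt = sentence.replace(mention, f"{tokenizer.mask_token} such as {mention}", 1)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the [MASK] position and read off its most probable fillers.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    top_ids = logits[0, mask_pos].topk(k).indices.tolist()
    return tokenizer.convert_ids_to_tokens(top_ids)

# Hypothetical usage: candidate weak type labels for a person mention.
print(hypernym_candidates(
    "In late 2015, Leonardo DiCaprio starred in The Revenant.",
    "Leonardo DiCaprio",
))
```

In the pipeline the abstract sketches, such candidate fillers would then be filtered and mapped onto the typing vocabulary; the abstract's "simple type mapping" remark refers to the analogous step for traditional fine-grained type sets.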