About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Ian Tenney
- sl:arxiv_num : 1905.06316
- sl:arxiv_published : 2019-05-15T17:48:56Z
- sl:arxiv_summary : Contextualized representation models such as ELMo (Peters et al., 2018a) and
BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a
diverse array of downstream NLP tasks. Building on recent token-level probing
work, we introduce a novel edge probing task design and construct a broad suite
of sub-sentence tasks derived from the traditional structured NLP pipeline. We
probe word-level contextual representations from four recent models and
investigate how they encode sentence structure across a range of syntactic,
semantic, local, and long-range phenomena. We find that existing models trained
on language modeling and translation produce strong representations for
syntactic phenomena, but only offer comparably small improvements on semantic
tasks over a non-contextual baseline.@en
- sl:arxiv_title : What do you learn from context? Probing for sentence structure in contextualized word representations@en
- sl:arxiv_updated : 2019-05-15T17:48:56Z
- sl:bookmarkOf : https://arxiv.org/abs/1905.06316
- sl:creationDate : 2020-08-02
- sl:creationTime : 2020-08-02T11:25:38Z