About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Bohan Li
- sl:arxiv_num : 2011.05864
- sl:arxiv_published : 2020-11-02T13:14:57Z
- sl:arxiv_summary : Pre-trained contextual representations like BERT have achieved great success
in natural language processing. However, the sentence embeddings from the
pre-trained language models without fine-tuning have been found to poorly
capture the semantic meaning of sentences. In this paper, we argue that the
semantic information in the BERT embeddings is not fully exploited. We first
reveal the theoretical connection between the masked language model
pre-training objective and the semantic similarity task, and then
analyze the BERT sentence embeddings empirically. We find that BERT always
induces a non-smooth, anisotropic semantic space of sentences, which harms its
performance on semantic similarity tasks. To address this issue, we propose to
transform the anisotropic sentence embedding distribution to a smooth and
isotropic Gaussian distribution through normalizing flows that are learned with
an unsupervised objective. Experimental results show that our proposed
BERT-flow method obtains significant performance gains over the
state-of-the-art sentence embeddings on a variety of semantic textual
similarity tasks. The code is available at
https://github.com/bohanli/BERT-flow.@en
- sl:arxiv_title : On the Sentence Embeddings from Pre-trained Language Models@en
- sl:arxiv_updated : 2020-11-02T13:14:57Z
- sl:bookmarkOf : https://arxiv.org/abs/2011.05864
- sl:creationDate : 2021-04-19
- sl:creationTime : 2021-04-19T01:13:25Z
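
The abstract describes the core idea of BERT-flow: learn a normalizing flow with an unsupervised maximum-likelihood objective that maps (frozen) BERT sentence embeddings into a smooth, isotropic Gaussian space. Below is a minimal, hypothetical sketch of that calibration step, assuming PyTorch, a single RealNVP-style affine coupling layer, and placeholder mean-pooled embeddings; the names and architecture are illustrative assumptions, not the authors' implementation (see https://github.com/bohanli/BERT-flow for the actual code).

```python
# Minimal sketch (not the authors' code): map BERT sentence embeddings to an
# isotropic Gaussian by maximizing likelihood under a normalizing flow.
# A single RealNVP-style affine coupling layer is shown; in practice several
# invertible layers would be stacked while BERT itself stays frozen.
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Invertible layer: leaves the first half of the dimensions unchanged
    and applies an affine transform to the second half, conditioned on the
    first, so log|det J| is just the sum of the log-scales."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)                  # bounded scales for stability
        z = torch.cat([x1, x2 * torch.exp(log_s) + t], dim=-1)
        return z, log_s.sum(dim=-1)                # transformed point, log|det J|

def flow_nll(flow: nn.Module, emb: torch.Tensor) -> torch.Tensor:
    """Unsupervised objective: negative log-likelihood of the embeddings,
    i.e. log N(z; 0, I) plus the flow's log-determinant, averaged and negated."""
    z, log_det = flow(emb)
    log_pz = -0.5 * (z ** 2).sum(dim=-1) - 0.5 * z.size(-1) * math.log(2 * math.pi)
    return -(log_pz + log_det).mean()

# Usage sketch: `sentence_embs` stands in for mean-pooled BERT outputs (dim 768).
sentence_embs = torch.randn(32, 768)
flow = AffineCoupling(768)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
opt.zero_grad()
flow_nll(flow, sentence_embs).backward()
opt.step()
```

At inference time, sentences would be embedded with BERT and then pushed through the trained flow, and cosine similarity would be computed in the calibrated Gaussian space rather than on the raw embeddings.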