About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Guoyin Wang
- sl:arxiv_num : 1805.04174
- sl:arxiv_published : 2018-05-10T20:42:52Z
- sl:arxiv_summary : Word embeddings are effective intermediate representations for capturing
semantic regularities between words, when learning the representations of text
sequences. We propose to view text classification as a label-word joint
embedding problem: each label is embedded in the same space with the word
vectors. We introduce an attention framework that measures the compatibility of
embeddings between text sequences and labels. The attention is learned on a
training set of labeled samples to ensure that, given a text sequence, the
relevant words are weighted higher than the irrelevant ones. Our method
maintains the interpretability of word embeddings, and enjoys a built-in
ability to leverage alternative sources of information, in addition to input
text sequences. Extensive results on several large text datasets show that
the proposed framework outperforms state-of-the-art methods by a large
margin, in terms of both accuracy and speed.
- sl:arxiv_title : Joint Embedding of Words and Labels for Text Classification
- sl:arxiv_updated : 2018-05-10T20:42:52Z
- sl:bookmarkOf : https://arxiv.org/abs/1805.04174
- sl:creationDate : 2020-02-18
- sl:creationTime : 2020-02-18T15:01:31Z
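The abstract's core idea — embedding labels in the same space as words and scoring each word by its compatibility with the labels — can be sketched in a few lines. This is a simplified illustration only, not the paper's actual model (which learns the attention with a phrase-level convolution and trains all embeddings jointly); the function name, shapes, and the max-over-labels scoring choice here are assumptions for the sketch.

```python
import numpy as np

def label_word_attention(word_embs, label_embs):
    """Sketch of label-word joint-embedding attention.

    word_embs:  (seq_len, d) word vectors for one text sequence.
    label_embs: (num_classes, d) label vectors in the same space.
    Returns per-word attention weights and per-class scores.
    """
    # Cosine compatibility between every word and every label.
    W = word_embs / np.linalg.norm(word_embs, axis=1, keepdims=True)
    C = label_embs / np.linalg.norm(label_embs, axis=1, keepdims=True)
    G = W @ C.T                      # (seq_len, num_classes)

    # Score each word by its best-matching label, then softmax over
    # the sequence so relevant words are weighted higher.
    scores = G.max(axis=1)
    beta = np.exp(scores - scores.max())
    beta /= beta.sum()               # attention weights, sum to 1

    # Attention-weighted document representation in embedding space.
    z = beta @ word_embs             # (d,)

    # Classify by compatibility of the document vector with each label.
    logits = label_embs @ z          # (num_classes,)
    return beta, logits
```

Because both the document vector and the labels live in the word-embedding space, the attention weights `beta` are directly interpretable: they show which words drove the prediction.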