About This Document
- sl:arxiv_author : Diane Bouchacourt, Ludovic Denoyer
- sl:arxiv_firstAuthor : Diane Bouchacourt
- sl:arxiv_num : 1905.11852
- sl:arxiv_published : 2019-05-28T14:33:19Z
- sl:arxiv_summary : Providing explanations along with predictions is crucial in some text
processing tasks. We therefore propose a new self-interpretable model that
performs output prediction and simultaneously provides an explanation in terms
of the presence of particular concepts in the input. To do so, our model's
prediction relies solely on a low-dimensional binary representation of the
input, where each feature denotes the presence or absence of a concept. The
presence of a concept is decided from an excerpt, i.e., a small sequence of
consecutive words in the text. Relevant concepts for the prediction task at
hand are automatically defined by our model, avoiding the need for
concept-level annotations. To ease interpretability, we enforce that, for each
concept, the corresponding excerpts share similar semantics and are
distinguishable from one another. We experimentally demonstrate the relevance of
our approach on text classification and multi-sentiment analysis tasks.@en
- sl:arxiv_title : EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction@en
- sl:arxiv_updated : 2019-09-27T14:16:30Z
- sl:bookmarkOf : https://arxiv.org/abs/1905.11852
- sl:creationDate : 2019-12-05
- sl:creationTime : 2019-12-05T15:03:48Z
- sl:mainDoc : http://www.semanlink.net/doc/2019/12/journee_commune_afia_aria_2
- sl:relatedDoc : http://www.semanlink.net/doc/2019/12/unsupervised_learning_with_text
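The abstract describes the pipeline concretely enough to sketch: each concept's presence is decided from the best-matching excerpt of consecutive words, and the classifier sees only the resulting binary concept vector, so pointing at those excerpts yields the explanation. Below is a minimal sketch of that idea in PyTorch. The module names, the mean-pooled window encoder, the layer sizes, and the straight-through relaxation of the hard 0/1 decision are all illustrative assumptions, not the authors' implementation (the paper trains the binary decisions with its own scheme).

```python
# Minimal sketch of an EDUCE-style self-interpretable classifier.
# Hypothetical names and sizes throughout; not the paper's architecture.
import torch
import torch.nn as nn

class ConceptClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, n_concepts=10,
                 n_classes=2, excerpt_len=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.excerpt_len = excerpt_len
        # One score per concept for each candidate excerpt.
        self.concept_scorers = nn.Linear(emb_dim, n_concepts)
        # The final prediction sees ONLY the binary concept vector.
        self.classifier = nn.Linear(n_concepts, n_classes)

    def forward(self, token_ids):
        emb = self.embed(token_ids)                   # (B, T, D)
        # Candidate excerpts = sliding windows of consecutive words,
        # represented here by mean-pooling their word embeddings.
        windows = emb.unfold(1, self.excerpt_len, 1)  # (B, W, D, L)
        excerpts = windows.mean(dim=-1)               # (B, W, D)
        scores = self.concept_scorers(excerpts)       # (B, W, n_concepts)
        # Best-matching excerpt per concept decides that concept's presence.
        best, best_idx = scores.max(dim=1)            # (B, n_concepts)
        # Hard 0/1 presence with a straight-through estimator so the model
        # stays trainable end to end (an assumed, common relaxation).
        z_soft = torch.sigmoid(best)
        z_hard = (z_soft > 0.5).float()
        z = z_hard + z_soft - z_soft.detach()
        logits = self.classifier(z)                   # prediction from z only
        return logits, z_hard, best_idx

model = ConceptClassifier(vocab_size=20000)
logits, presence, where = model(torch.randint(0, 20000, (4, 40)))
```

In this sketch, `presence` is the low-dimensional binary representation the abstract mentions, and `where` indexes, for each concept, the excerpt that triggered it; surfacing those excerpts is what makes the prediction self-explaining.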