About This Document
- sl:arxiv_author :
- sl:arxiv_firstAuthor : Marina Danilevsky
- sl:arxiv_num : 2010.00711
- sl:arxiv_published : 2020-10-01T22:33:21Z
- sl:arxiv_summary : Recent years have seen important advances in the quality of state-of-the-art
models, but this has come at the expense of models becoming less interpretable.
This survey presents an overview of the current state of Explainable AI (XAI),
considered within the domain of Natural Language Processing (NLP). We discuss
the main categorization of explanations, as well as the various ways
explanations can be arrived at and visualized. We detail the operations and
explainability techniques currently available for generating explanations for
NLP model predictions, to serve as a resource for model developers in the
community. Finally, we point out the current gaps and encourage directions for
future work in this important research area.@en
- sl:arxiv_title : A Survey of the State of Explainable AI for Natural Language Processing
- sl:arxiv_updated : 2020-10-01T22:33:21Z
- sl:bookmarkOf : https://arxiv.org/abs/2010.00711
- sl:creationDate : 2022-09-08
- sl:creationTime : 2022-09-08T09:30:14Z