Rationalization for explainable NLP: a survey

Authors: Sai Gurrapu, Ajay Kulkarni, Lifu Huang, Ismini Lourentzou, Feras A. Batarseh
Language: English
Year of publication: 2023
Source: Frontiers in Artificial Intelligence, Vol 6 (2023)
Document type: article
ISSN: 2624-8212
DOI: 10.3389/frai.2023.1225093
Description: Recent advances in deep learning have improved the performance of many Natural Language Processing (NLP) tasks such as translation, question-answering, and text classification. However, this improvement comes at the expense of model explainability. Black-box models make it difficult to understand the internals of a system and the process by which it arrives at an output. Numerical (LIME, Shapley) and visualization (saliency heatmap) explainability techniques are helpful; however, they are insufficient because they require specialized knowledge. These factors led rationalization to emerge as a more accessible explainability technique in NLP. Rationalization justifies a model's output by providing a natural language explanation (rationale). Recent improvements in natural language generation have made rationalization an attractive technique because it is intuitive, human-comprehensible, and accessible to non-technical users. Because rationalization is a relatively new field, its literature is disorganized. This survey, the first on the topic, analyzes rationalization literature in NLP from 2007 to 2022. It presents available methods, explainability evaluations, code, and datasets used across the various NLP tasks that employ rationalization. Further, a new subfield of Explainable AI (XAI), namely Rational AI (RAI), is introduced to advance the current state of rationalization. A discussion of observed insights, challenges, and future directions points to promising research opportunities.
Database: Directory of Open Access Journals
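
The abstract contrasts numerical explainability techniques such as LIME with natural language rationales. As a minimal Python sketch of that contrast (not drawn from the survey itself), the example below trains a toy sentiment classifier and asks LIME for per-token weights; the toy sentences, labels, model choice, and test input are all illustrative assumptions. The closing comment shows the kind of natural language explanation that rationalization would produce instead.

# Minimal sketch (illustrative, not from the survey): LIME yields numeric
# per-token weights, whereas a rationale is a natural language sentence.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy sentiment classifier trained on assumed example data.
texts = ["great movie, loved it", "wonderful and fun",
         "terrible plot, boring", "awful acting, hated it"]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# LIME explains one prediction as a list of (token, weight) pairs --
# numbers that the user must know how to interpret.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("loved the fun plot",
                                 model.predict_proba, num_features=4)
print(exp.as_list())  # e.g. [('loved', 0.31), ('fun', 0.27), ...]

# A rationale, by contrast, would read something like:
# "Predicted positive because the review says it 'loved' the 'fun' plot."

The design point the abstract makes follows directly: the LIME output above requires familiarity with feature attributions to interpret, while the commented rationale is immediately readable by a non-technical user.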