From Philosophy to Interfaces: an Explanatory Method and a Tool Inspired by Achinstein’s Theory of Explanation
| Author: | Francesco Sovrano, Fabio Vitali |
|---|---|
| Contributors: | Sovrano, Francesco; Vitali, Fabio |
| Language: | English |
| Year of publication: | 2021 |
| Subject: | FOS: Computer and information sciences; Computer Science - Artificial Intelligence (cs.AI); Computer Science - Human-Computer Interaction (cs.HC); Human–computer interaction; Question answering; Philosophical theory; Process (engineering); Pipeline (software); IBM; Presentation logic; User interface; Methods for explanations; Education and learning-related technologies; ExplanatorY Artificial Intelligence (YAI); Natural language |
| Description: | We propose a new method for explanations in Artificial Intelligence (AI) and a tool to test its expressive power within a user interface. To bridge the gap between philosophy and human-computer interfaces, we present a new approach for generating interactive explanations, based on a pipeline of AI algorithms that structures natural-language documents into knowledge graphs and answers questions effectively and satisfactorily. Among the mainstream philosophical theories of explanation, we identified one that, in our view, is most readily applicable as a practical model for user-centric tools: Achinstein's Theory of Explanation. With this work we aim to show that Achinstein's theory can actually be adapted and implemented in a concrete software application, as an interactive question-answering process. To this end, we found a way to handle the generic (archetypal) questions that implicitly characterise an explanatory process, treating them as preliminary overviews rather than as answers to explicit questions, as commonly understood. To demonstrate the expressive power of this approach, we designed and implemented a pipeline of AI algorithms for generating interactive explanations in the form of overviews, focusing on this aspect of explanation rather than on existing interfaces and presentation-logic layers for question answering. We tested our hypothesis on a well-known XAI-powered credit-approval system by IBM, comparing CEM, a static tool for post-hoc explanations, with an extension we developed that adds interactive explanations based on our model. The results of the user study, involving more than 100 participants, showed that our solution produced a statistically significant improvement in effectiveness (U=931.0, p=0.036) over the baseline, thus providing evidence in favour of our theory. (Illustrative sketches of the overview-generation idea and of the statistical comparison follow this record.) |
| Database: | OpenAIRE |
| External link: | |
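
The Description above outlines an approach where explanations are generated as overviews answering generic, archetypal questions over a knowledge graph extracted from natural-language documents. The paper's actual pipeline is not reproduced here; the following is a minimal Python sketch of that idea under stated assumptions: the `Triple` class, the toy graph, the `ARCHETYPES` mapping, and `build_overview` are all hypothetical illustrations, not the authors' API.

```python
# Hypothetical sketch of the archetypal-question idea from the abstract:
# an overview is composed by answering a fixed set of generic questions
# about a topic, drawing on triples extracted from documents.
from dataclasses import dataclass

@dataclass
class Triple:
    subject: str
    relation: str
    obj: str

# A toy "knowledge graph": triples that might be extracted from the
# documentation of a credit-approval system (placeholder content).
KG = [
    Triple("credit denial", "is caused by", "a low credit score"),
    Triple("credit score", "is computed from", "payment history and debt"),
    Triple("CEM", "explains", "which features to change for approval"),
]

# Archetypal questions mapped to the relations that can answer them.
ARCHETYPES = {
    "Why?": {"is caused by"},
    "How?": {"is computed from"},
    "What?": {"explains"},
}

def build_overview(topic: str) -> str:
    """Compose an overview of `topic` by answering each archetypal question."""
    lines = []
    for question, relations in ARCHETYPES.items():
        answers = [
            f"{t.subject} {t.relation} {t.obj}"
            for t in KG
            if t.relation in relations and topic in (t.subject, t.obj)
        ]
        if answers:
            lines.append(f"{question} " + "; ".join(answers) + ".")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_overview("credit denial"))
```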
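The Description also reports a between-groups comparison of effectiveness (U=931.0, p=0.036). A U statistic of this form is consistent with a Mann-Whitney U test, though that is an assumption here rather than something the record states. The snippet below shows how such a comparison could be computed with SciPy's `mannwhitneyu`; the score arrays are placeholders, not the study's data.

```python
# Hedged reconstruction of the reported comparison: effectiveness scores of
# the interactive tool versus the static CEM baseline. The arrays below are
# hypothetical placeholders, not the actual user-study measurements.
from scipy.stats import mannwhitneyu

baseline_scores = [0.40, 0.50, 0.55, 0.60, 0.45, 0.50]      # hypothetical
interactive_scores = [0.55, 0.60, 0.70, 0.65, 0.60, 0.75]   # hypothetical

# One-sided test: does the interactive tool score higher than the baseline?
u_stat, p_value = mannwhitneyu(interactive_scores, baseline_scores,
                               alternative="greater")
print(f"U = {u_stat}, p = {p_value:.3f}")
```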