Evaluating Explainability Methods Intended for Multiple Stakeholders.

Author: Martin, Kyle; Liret, Anne; Wiratunga, Nirmalie; Owusu, Gilbert; Kern, Mathias
Source: KI: Künstliche Intelligenz; Nov 2021, Vol. 35, Issue 3/4, p397-411, 15p
Abstract: Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanation needs of different groups using a single system. In this paper we present an explainability framework, formed of a catalogue of explanation methods and designed to integrate with a range of projects within a telecommunications organisation. Methods in the catalogue are split into low-level and high-level explanations, offering increasing degrees of contextual support. We motivate this framework using the specific case study of explaining the conclusions of field network engineering experts to non-technical planning staff, and evaluate our results using feedback from two distinct user groups: domain-expert telecommunication engineers and non-expert desk agent staff. We also present and investigate two metrics designed to model the quality of explanations: Meet-In-The-Middle (MITM) and Trust-Your-Neighbours (TYN). Our analysis of these metrics offers new insights into the use of similarity knowledge for the evaluation of explanations. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
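
Note: the record above does not define the MITM or TYN metrics, so the following is a minimal, hypothetical sketch of the general idea the abstract gestures at, namely using similarity knowledge over past cases to score explanation quality. The function name `tyn_style_score`, the choice of cosine similarity, the parameter `k`, and the toy data are all assumptions for illustration; this is not the authors' actual metric.

```python
# Illustrative sketch only: a generic neighbour-agreement score in the
# spirit of similarity-based explanation evaluation. Not the paper's
# published metric; all names and data here are hypothetical.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def tyn_style_score(query: np.ndarray,
                    case_features: np.ndarray,
                    case_ratings: np.ndarray,
                    k: int = 3) -> float:
    """Score an explanation for `query` by the mean user rating of its
    k most similar past cases (loosely, "trust your neighbours")."""
    sims = np.array([cosine_sim(query, c) for c in case_features])
    nearest = np.argsort(sims)[::-1][:k]  # indices of the top-k most similar cases
    return float(case_ratings[nearest].mean())

# Toy usage: 5 past cases whose explanations received user ratings in [0, 1].
rng = np.random.default_rng(0)
cases = rng.normal(size=(5, 4))
ratings = np.array([1.0, 0.8, 0.2, 0.9, 0.4])
print(tyn_style_score(cases[0] + 0.01, cases, ratings, k=3))
```

Under these assumptions, a query whose nearest neighbours carried well-rated explanations scores highly, which is one plausible way similarity knowledge could proxy for explanation quality.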