Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies
Author: | Mark T. Keane, Eoin M. Kenny, Molly S. Quinn, Courtney Ford |
---|---|
Year of publication: | 2021 |
Subject: |
Black box
Computer science, Artificial intelligence, Deep learning, Convolutional neural network, Semantic reasoner, Natural language processing, Test (assessment), Proxy (statistics), Linguistics and Language, MNIST database |
Source: | Artificial Intelligence. 294:103459 |
ISSN: | 0004-3702 |
Description: | In this paper, we describe a post-hoc explanation-by-example approach to eXplainable AI (XAI), in which a black-box, deep learning system is explained by reference to a more transparent, proxy model (here, a case-based reasoner), based on a feature-weighting analysis of the former that is used to find explanatory cases from the latter (one instance of the so-called Twin Systems approach). A novel method (COLE-HP) for extracting feature-weights from black-box models is demonstrated for a convolutional neural network (CNN) applied to the MNIST dataset, where the extracted feature-weights are used to find explanatory nearest neighbours for test instances. Three user studies are reported, examining people's judgements of right and wrong classifications made by this XAI twin-system in the presence/absence of explanations-by-example and under different error-rates (from 3% to 60%). The judgements gathered include item-level evaluations of both correctness and reasonableness, and system-level evaluations of trust, satisfaction, correctness, and reasonableness. Several proposals are made about the user's mental model in these tasks and how it is impacted by explanations at the item- and system-level. The wider lessons from this work for XAI and its user studies are reviewed. |
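The retrieval step the abstract describes can be sketched as a feature-weighted nearest-neighbour search. The sketch below is illustrative only, not the authors' COLE-HP code: the weight values and data are hypothetical, standing in for weights that would actually be derived from an analysis of the black-box CNN.

```python
# Minimal sketch of explanation-by-example via feature-weighted
# nearest-neighbour retrieval (assumed simplification of the paper's approach).
import numpy as np

def nearest_explanatory_cases(query, train_X, train_y, weights, k=1):
    """Return the indices and labels of the k training cases closest to
    `query` under a feature-weighted Euclidean distance."""
    diffs = train_X - query                               # (n, d)
    dists = np.sqrt(((diffs ** 2) * weights).sum(axis=1)) # weighted distances
    idx = np.argsort(dists)[:k]
    return idx, train_y[idx]

# Toy data: four two-feature training cases with class labels.
train_X = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 0.1], [0.2, 0.8]])
train_y = np.array([0, 1, 1, 0])
weights = np.array([0.9, 0.1])  # hypothetical feature-weights from the model
idx, labels = nearest_explanatory_cases(np.array([1.0, 0.0]),
                                        train_X, train_y, weights, k=1)
# The retrieved training case serves as the explanatory example shown
# to the user alongside the classifier's prediction.
```

Under these toy weights, feature 0 dominates the distance, so the case at `[0.9, 0.1]` is retrieved as the explanatory example for the query.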
Database: | OpenAIRE |
External link: |