LEAFAGE: Example-based and Feature importance-based Explanations for Black-box ML models
Authors: Ajaya Adhikari, David M. J. Tax, Riccardo Satta, Matthias Faeth
Language: English
Year of publication: 2019
Subject:
Example-based reasoning, feature importance, explainable AI (XAI), machine learning models, black box, model approximation, fidelity, transparency, problem solving, empirical study, fuzzy systems, artificial intelligence
Source: 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2019), 23-26 June 2019
Description: Explainable Artificial Intelligence (XAI) is an emergent research field that tries to cope with the lack of transparency of AI systems by providing human-understandable explanations for the underlying Machine Learning models. This work presents a new explanation extraction method called LEAFAGE. Explanations are provided both in terms of feature importance and of similar classification examples; the latter is a well-known strategy for problem solving and justification in social science. LEAFAGE leverages the fact that the reasoning behind a single decision/prediction for a single data point is generally simpler to understand than the complete model; it produces explanations by generating simpler yet locally accurate approximations of the original model. LEAFAGE performs better overall than the current state of the art in terms of fidelity of the model approximation, in particular when Machine Learning models with non-linear decision boundaries are analysed. LEAFAGE was also evaluated in terms of usefulness for the user, an aspect still largely overlooked in the scientific literature. Results show interesting and partly counter-intuitive findings, such as the fact that providing no explanation is sometimes better than providing certain kinds of explanation. © 2019 IEEE.
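The local-approximation idea summarised in the abstract (fit a simple surrogate around a single prediction, then report its coefficients as feature importances alongside nearby same-class examples) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual LEAFAGE algorithm: the kernel, surrogate choice, and all names and parameters here are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Toy data and a non-linear black-box model to be explained
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] ** 2 + X[:, 1] > 1).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_explanation(x, X, model, k=5, kernel_width=0.75):
    """Explain the model's prediction at x with (a) a locally weighted
    linear surrogate, whose coefficients act as feature importances,
    and (b) the k nearest training examples sharing the predicted class."""
    pred = model.predict(x.reshape(1, -1))[0]
    # Target for the surrogate: the black box's score for the predicted class
    scores = model.predict_proba(X)[:, pred]
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)  # proximity weights around x
    surrogate = Ridge(alpha=1.0).fit(X, scores, sample_weight=w)
    same_class = np.where(model.predict(X) == pred)[0]
    nearest = same_class[np.argsort(d[same_class])[:k]]
    return surrogate.coef_, nearest

coefs, examples = local_explanation(np.array([0.0, 1.5]), X, black_box)
```

The weighted ridge fit plays the role of the "simpler yet locally accurate approximation" of the abstract; restricting the nearest neighbours to the predicted class gives the example-based half of the explanation.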
Database: OpenAIRE
External link: