BEExAI: Benchmark to Evaluate Explainable AI

Author: Sithakoul, Samuel; Meftah, Sara; Feutry, Clément
Year of publication: 2024
Subject:
Source: World Conference on Explainable Artificial Intelligence, 2024
Document type: Working Paper
Description: Recent research in explainability has given rise to numerous post-hoc attribution methods aimed at improving our understanding of the outputs of black-box machine learning models. However, the evaluation of explanation quality lacks a cohesive approach: there is no consensus on a methodology for deriving quantitative metrics that gauge the efficacy of post-hoc attribution methods. Furthermore, with the development of increasingly complex deep learning models for diverse data applications, a reliable way of measuring the quality and correctness of explanations is becoming critical. We address this by proposing BEExAI, a benchmark tool that enables large-scale comparison of different post-hoc XAI methods, employing a set of selected evaluation metrics.
Database: arXiv
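
The Description refers to quantitative metrics that gauge the efficacy of post-hoc attribution methods. As one concrete illustration, below is a minimal sketch of a common deletion-style faithfulness metric; the function name `deletion_faithfulness`, the toy model, and all implementation details are assumptions for illustration, not BEExAI's actual API or the paper's exact metric definitions.

```python
import numpy as np

def deletion_faithfulness(model_fn, x, attributions, baseline=0.0, steps=10):
    """Area under the 'deletion' curve: progressively replace the features
    an attribution ranks as most important with a baseline value and track
    how fast the model's output drops. A lower area suggests the attribution
    is more faithful to the model's behavior. (Hypothetical sketch, not
    BEExAI's implementation.)
    """
    order = np.argsort(-np.abs(attributions))  # most important features first
    n = len(order)
    scores = [model_fn(x)]  # model output on the unperturbed input
    for k in range(1, steps + 1):
        x_perturbed = x.copy()
        # Delete (replace with the baseline) the top k/steps fraction of features.
        x_perturbed[order[: int(np.ceil(k / steps * n))]] = baseline
        scores.append(model_fn(x_perturbed))
    scores = np.array(scores) / (scores[0] + 1e-12)  # normalize to the original output
    return float(scores.mean())  # rectangle-rule approximation of the curve's area

# Toy usage: a logistic "model" whose exact attributions are easy to write down.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.normal(size=8)
model = lambda v: 1.0 / (1.0 + np.exp(-(w @ v)))
attr = w * x  # Gradient*Input-style attributions for this simple model
print(deletion_faithfulness(model, x, attr))
```

If removing the features the explanation ranks highest degrades the model's output quickly, the score is low and the attribution can be considered faithful; a benchmark such as BEExAI aggregates metrics of this kind across methods, models, and datasets to enable the large-scale comparison the Description mentions.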