Guidelines and Evaluation of Clinical Explainable AI in Medical Image Analysis
Author: Jin, Weina; Li, Xiaoxiao; Fatehi, Mostafa; Hamarneh, Ghassan
Publication Year: 2022
Source: Medical Image Analysis, 2022
Document Type: Working Paper
DOI: 10.1016/j.media.2022.102684
Description: Explainable artificial intelligence (XAI) is essential for enabling clinical users to get informed decision support from AI and comply with evidence-based medical practice. Applying XAI in clinical settings requires proper evaluation criteria to ensure the explanation technique is both technically sound and clinically useful, but specific support is lacking to achieve this goal. To bridge the research gap, we propose the Clinical XAI Guidelines, which consist of five criteria a clinical XAI needs to be optimized for. The guidelines recommend choosing an explanation form based on Guideline 1 (G1) Understandability and G2 Clinical relevance. For the chosen explanation form, its specific XAI technique should be optimized for G3 Truthfulness, G4 Informative plausibility, and G5 Computational efficiency. Following the guidelines, we conducted a systematic evaluation on a novel problem of multi-modal medical image explanation with two clinical tasks, and proposed new evaluation metrics accordingly. Sixteen commonly used heatmap XAI techniques were evaluated and found to be insufficient for clinical use due to their failure on G3 and G4. Our evaluation demonstrated the use of the Clinical XAI Guidelines to support the design and evaluation of clinically viable XAI.
Comment: Code: http://github.com/weinajin/multimodal_explanation; Supplementary Material S1 and S2: https://github.com/weinajin/multimodal_explanation/tree/main/paper
Database: arXiv
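The abstract's G4 (Informative plausibility) asks whether a heatmap explanation agrees with clinically meaningful image regions. As an illustrative sketch only (the paper's own metrics are defined in the linked code, not here), one common way to quantify heatmap-to-region agreement is to threshold the saliency map at its top-activated fraction of pixels and compute intersection-over-union with a ground-truth feature mask; the function name and `keep_fraction` parameter below are assumptions for this example:

```python
import numpy as np

def heatmap_mask_iou(heatmap, mask, keep_fraction=0.1):
    """Illustrative plausibility score: IoU between the top `keep_fraction`
    of heatmap pixels and a binary ground-truth feature mask."""
    flat = heatmap.ravel()
    k = max(1, int(keep_fraction * flat.size))
    # threshold at the k-th largest heatmap value
    thresh = np.partition(flat, -k)[-k]
    binary = heatmap >= thresh
    mask = mask.astype(bool)
    inter = np.logical_and(binary, mask).sum()
    union = np.logical_or(binary, mask).sum()
    return inter / union if union else 0.0

# toy example: saliency concentrated exactly on the annotated quadrant
hm = np.zeros((8, 8)); hm[:4, :4] = 1.0
gt = np.zeros((8, 8)); gt[:4, :4] = 1
print(heatmap_mask_iou(hm, gt, keep_fraction=0.25))  # → 1.0
```

A score like this measures only spatial agreement with human annotations; as the guidelines stress, plausibility alone does not establish truthfulness (G3), which requires testing whether the heatmap reflects what the model actually uses.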