Concept-based AI interpretability in physiological time-series data: Example of abnormality detection in electroencephalography.

Author: Brenner A; Institute of Medical Informatics, University of Münster, Münster, Germany. Electronic address: alexander.brenner@uni-muenster.de., Knispel F; Institute of Medical Informatics, Medical Faculty, RWTH Aachen University, Aachen, Germany., Fischer FP; Department of Epileptology and Neurology, Medical Faculty, RWTH Aachen University Hospital, Aachen, Germany., Rossmanith P; Theoretical Computer Science, Department of Computer Science, RWTH Aachen University, Aachen, Germany., Weber Y; Department of Epileptology and Neurology, Medical Faculty, RWTH Aachen University Hospital, Aachen, Germany., Koch H; Department of Epileptology and Neurology, Medical Faculty, RWTH Aachen University Hospital, Aachen, Germany., Röhrig R; Institute of Medical Informatics, Medical Faculty, RWTH Aachen University, Aachen, Germany., Varghese J; Institute of Medical Informatics, University of Münster, Münster, Germany., Kutafina E; Institute for Biomedical Informatics, Faculty of Medicine, University Hospital Cologne, University of Cologne, Cologne, Germany.
Language: English
Source: Computer Methods and Programs in Biomedicine [Comput Methods Programs Biomed] 2024 Dec; Vol. 257, pp. 108448. Date of Electronic Publication: 2024 Sep 30.
DOI: 10.1016/j.cmpb.2024.108448
Abstract: Background and Objective: Despite recent performance advancements, deep learning models are not yet widely adopted in clinical practice. The intrinsic opacity of such systems is commonly cited as one major reason for this reluctance. This has motivated methods that aim to explain model functioning. Known limitations of feature-based explanations have led to increased interest in concept-based interpretability. Testing with Concept Activation Vectors (TCAV) employs human-understandable, abstract concepts to explain model behavior. The method has previously been applied to the medical domain in the context of electronic health records, retinal fundus images and magnetic resonance imaging.
Methods: We explore the use of TCAV for building interpretable models on physiological time series, using the example of abnormality detection in electroencephalography (EEG). For this purpose, we adopt the XceptionTime model, which is suitable for multi-channel physiological data of variable size. The model provides state-of-the-art performance on raw EEG data and is publicly available. We propose and test several ideas for concept definition: mining metadata, using additional labeled EEG data, and extracting interpretable signal characteristics in the form of frequencies. By including our own hospital data with analogous labeling, we further evaluate the robustness of our approach.
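For readers unfamiliar with TCAV, the sketch below illustrates the kind of computation involved: a concept activation vector (CAV) is taken as the normal to a linear boundary separating layer activations of concept examples from random counterexamples, and the TCAV score is the fraction of class examples whose prediction gradient points along that direction. This is a minimal, hypothetical illustration, not the authors' implementation; synthetic arrays stand in for activations and gradients that would in practice be extracted from a trained network such as XceptionTime applied to EEG segments, and names like `concept_acts` are placeholders.

```python
# Minimal TCAV-style sketch (illustrative only, not the paper's code).
# Synthetic data stands in for layer activations and logit gradients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 128  # hypothetical width of the chosen network layer

# Activations for concept examples (e.g. segments containing epileptiform
# discharges) and for random counterexamples, at the chosen layer.
concept_acts = rng.normal(0.5, 1.0, size=(200, d))
random_acts = rng.normal(0.0, 1.0, size=(200, d))

# 1) Concept Activation Vector: normal to the linear decision boundary
#    separating concept activations from random activations.
X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(200), np.zeros(200)])
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 2) Directional derivatives: gradient of the class logit (e.g. "abnormal
#    EEG") w.r.t. the layer activations, dotted with the CAV. Real gradients
#    would come from backpropagation through the model.
grads = rng.normal(size=(500, d))  # one gradient per class example
sensitivities = grads @ cav

# 3) TCAV score: fraction of class examples whose prediction is positively
#    sensitive to the concept direction.
tcav_score = float(np.mean(sensitivities > 0))
print(f"TCAV score: {tcav_score:.2f}")
```

In practice the score is recomputed against several random counterexample sets and tested for significance, so that only concepts with consistently high sensitivity are reported.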
Results: The tested concepts show a TCAV score distribution in line with clinical expectations, i.e., concepts known to have strong links to EEG pathologies (such as epileptiform discharges) received higher scores than neutral concepts (e.g., sex). The scores were consistent across the applied concept generation strategies.
Conclusions: TCAV has the potential to improve the interpretability of deep learning applied to multi-channel signals and to detect possible biases in the data. Still, further work on developing strategies for concept definition and validation on clinical physiological time series is needed to better understand how to extract clinically relevant information from the concept sensitivity scores.
Competing Interests: Declaration of competing interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
(Copyright © 2024 The Author(s). Published by Elsevier B.V. All rights reserved.)
Database: MEDLINE