Comparing Interpretable AI Approaches for the Clinical Environment: an Application to COVID-19

Authors: Mohsen Abbaspour Onari, Marco S. Nobile, Isel Grau, Caro Fuchs, Yingqian Zhang, Arjen-Kars Boer, Volkher Scharnhorst
Contributors: Information Systems IE&IS, EAISI Foundational, EAISI Health, EAISI High Tech Systems, Chemical Biology, Eindhoven MedTech Innovation Center
Language: English
Year of publication: 2022
Source: 2022 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology, CIBCB 2022, 1-8
Description: Machine Learning (ML) models play an important role in healthcare thanks to their remarkable performance in predicting complex phenomena. During the COVID-19 pandemic, different ML models were implemented to support decisions in medical settings. However, clinical experts need to ensure that these models are valid, provide clinically useful information, and are implemented and used correctly. To this end, they need to understand the logic behind the models in order to trust them. Hence, developing transparent and interpretable models is increasingly relevant. In this work, we applied four interpretable ML models, namely logistic regression, decision trees, pyFUME, and RIPPER, to classify suspected COVID-19 patients based on clinical data collected from blood samples. After preprocessing the data set and training the models, we evaluated them based on their predictive performance. We then illustrated that interpretability can be achieved in different ways. First, SHAP explanations were built from the logistic regression and decision tree models to obtain feature importance. Then, the inherent interpretability of pyFUME and RIPPER was demonstrated. Finally, potential ways to achieve trust in future studies are briefly discussed.
Database: OpenAIRE
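
The following is a minimal sketch (not the authors' code) of the SHAP-on-logistic-regression workflow the description mentions, using scikit-learn and the shap package on synthetic data. The feature names (CRP, lymphocyte count, LDH) are hypothetical stand-ins for the paper's blood-sample markers, and the random labels are placeholders for the clinical outcome.

```python
# Illustrative sketch: fit an interpretable classifier on tabular
# blood-test-style data, then derive per-feature importance with SHAP.
# Feature names and the synthetic data are assumptions, not the paper's data.
import numpy as np
import pandas as pd
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical blood-sample features; the paper's actual markers may differ.
X = pd.DataFrame(
    rng.normal(size=(200, 3)),
    columns=["CRP", "lymphocyte_count", "LDH"],
)
y = rng.integers(0, 2, size=200)  # placeholder labels: 1 = suspected positive

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# LinearExplainer computes exact SHAP values for linear models.
explainer = shap.LinearExplainer(model, X_train)
shap_values = explainer.shap_values(X_test)

# The mean absolute SHAP value per feature serves as a global
# importance score, as in post-hoc explanations of this kind.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

On real data, the same pattern applies with a TreeExplainer for the decision tree model; pyFUME and RIPPER, by contrast, yield fuzzy rules and ordered rule lists that are readable without a post-hoc explainer.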