A framework for falsifiable explanations of machine learning models with an application in computational pathology
Author: | C. Sternemann, Hendrik Juette, Anna-Lena Kraeft, Celine Lugnier, Axel Mosig, D. Schuchmacher, Anke Reinacher-Schick, F. Grosserueschkamp, S. Schoerner, Claus Kuepper, Klaus Gerwert, Andrea Tannapfel |
---|---|
Year of publication: | 2022 |
Subject: |
Microscopy; Artificial neural network; Radiological and Ultrasound Technology; Inductive bias; Computer science; Deep learning; Health Informatics; Transparency (human–computer interaction); Machine Learning; Computer Graphics and Computer-Aided Design; Neoplasms; Falsifiability; Humans; Radiology, Nuclear Medicine and Imaging; Neural Networks, Computer; Artificial intelligence; Computer Vision and Pattern Recognition; Medical diagnosis; Infrared microscopy |
Source: | Medical Image Analysis. 82:102594 |
ISSN: | 1361-8415 |
Description: | In recent years, deep learning has been the key driver of breakthrough developments in computational pathology and other image-based approaches that support medical diagnosis and treatment. The underlying neural networks, as inherent black boxes, lack transparency and are often accompanied by approaches to explain their output. However, formally defining explainability has been a notoriously unsolved problem. Here, we introduce a hypothesis-based framework for falsifiable explanations of machine learning models. A falsifiable explanation is a hypothesis that connects an intermediate space induced by the model with the sample from which the data originate. We instantiate this framework in a computational pathology setting using label-free infrared microscopy. The intermediate space is an activation map, which is trained with an inductive bias to localize tumor. An explanation is constituted by hypothesizing that activation corresponds to tumor and associated structures, which we validate by histological staining as an independent secondary experiment. |
Database: | OpenAIRE |
External link: |