Improving Interpretability for Computer-aided Diagnosis tools on Whole Slide Imaging with Multiple Instance Learning and Gradient-based Explanations

Author: Antoine Pirovano, Hippolyte Heuberger, Saïd Ladjal, Sylvain Berlemont, Isabelle Bloch
Contributors: Image, Modélisation, Analyse, GEométrie, Synthèse (IMAGES); Laboratoire Traitement et Communication de l'Information (LTCI); Télécom Paris, Institut Mines-Télécom [Paris] (IMT); Isabelle Bloch
Language: English
Year of publication: 2020
Subjects:
[INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI]
FOS: Computer and information sciences
Computer Science - Machine Learning (cs.LG)
Computer Science - Computer Vision and Pattern Recognition (cs.CV)
Machine learning
Deep learning
Artificial intelligence
Discriminative model
Feature (machine learning)
Interpretability
Visualization
Computer-aided diagnosis
Source: Workshop iMIMIC at MICCAI
Workshop iMIMIC at MICCAI, 2020, Lima, Peru. pp.43-53
Interpretable and Annotation-Efficient Learning for Medical Image Computing ISBN: 9783030611651
iMIMIC/MIL3iD/LABELS@MICCAI
Description: Deep learning methods are widely used in medical applications to assist physicians in their daily routines. While their performance reaches expert level, interpretability (highlighting how and what a trained model learned, and why it makes a specific decision) is the next important challenge that deep learning methods must address to be fully integrated into the medical field. In this paper, we address the question of interpretability in the context of whole slide image (WSI) classification. We formalize the design of WSI classification architectures and propose a piece-wise interpretability approach relying on gradient-based methods, feature visualization, and the multiple instance learning setting. We aim to explain how the decision is made based on tile-level scoring, how these tile scores are computed, and which features are used and relevant for the task. After training two WSI classification architectures on the Camelyon-16 WSI dataset, highlighting the discriminative features learned, and validating our approach with pathologists, we propose a novel way of computing interpretability slide-level heat-maps, based on the extracted features, that improves tile-level classification AUC by more than 29%.
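The tile-scoring-and-aggregation pipeline summarized in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual architecture: it uses a hypothetical linear tile scorer with top/bottom-k aggregation (in the spirit of common MIL WSI classifiers) and random stand-in features, and exploits the fact that for a linear scorer the tile score itself serves as a gradient-based saliency value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in inputs (assumed): per-tile feature vectors, e.g. from a
# pretrained CNN encoder applied to WSI tiles.
n_tiles, n_features = 200, 64
tile_features = rng.normal(size=(n_tiles, n_features))
w = rng.normal(size=n_features)  # hypothetical learned scoring weights

# 1. Tile-level scoring: one scalar score per tile.
tile_scores = tile_features @ w          # shape (n_tiles,)

# 2. Slide-level decision: aggregate only the k most extreme tile
#    scores, as in top/bottom-k MIL aggregation schemes.
k = 5
sorted_scores = np.sort(tile_scores)
slide_logit = np.concatenate([sorted_scores[:k], sorted_scores[-k:]]).mean()

# 3. Interpretability heat-map: for this linear scorer the gradient of
#    a tile's score w.r.t. its features is the constant vector w, so
#    |score| is a natural per-tile saliency; normalise to [0, 1] to
#    overlay on the slide as a heat-map.
saliency = np.abs(tile_scores)
heatmap = (saliency - saliency.min()) / (saliency.max() - saliency.min())
```

Each entry of `heatmap` would then be painted back onto the corresponding tile's location in the slide to visualize which regions drive the decision.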
Comment: 8 pages (references excluded), 3 figures, presented in iMIMIC Workshop at MICCAI 2020
Database: OpenAIRE