Author: |
Zakaria Rguibi, Abdelmajid Hajami, Dya Zitouni, Amine Elqaraoui, Anas Bedraoui |
Language: |
English |
Year of publication: |
2022 |
Subject: |
|
Source: |
Electronics; Volume 11; Issue 11; Pages: 1775 |
ISSN: |
2079-9292 |
DOI: |
10.3390/electronics11111775 |
Description: |
Deep learning models have been increasingly applied to medical images for tasks such as lesion detection, segmentation, and diagnosis. However, the field suffers from a lack of concrete definitions of what constitutes a usable explanation in different settings. To identify specific aspects of explainability that may catalyse trust in deep learning models, we use several techniques to demonstrate different aspects of explaining convolutional neural networks in a medical imaging context. One important factor influencing clinicians' trust is how well a model can justify its predictions or outcomes. Clinicians need understandable explanations of why a machine-learned prediction was made so they can assess whether it is accurate and clinically useful. Providing appropriate explanations is generally understood to be critical for establishing trust in deep learning models. However, there is no clear understanding of what constitutes an explanation that is both understandable and useful across domains such as medical image analysis, which hampers efforts to develop explanatory tool sets tailored to these tasks. In this paper, we investigated two major directions for explaining convolutional neural networks: feature-based post hoc explanatory methods, which attempt to explain an already trained and fixed target model, and a preliminary analysis and choice of the model architecture, selected from 36 CNN architectures with different configurations, with an accuracy of 98% ± 0.156%. |
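To illustrate the first direction mentioned in the abstract, the sketch below shows one representative feature-based post hoc explanation in the Grad-CAM style, applied to a trained and fixed CNN. It is not taken from the paper: the use of PyTorch, the torchvision ResNet-18 as a stand-in target model, and the choice of `layer4[-1]` as the explained layer are all assumptions made for this example.

```python
# Hypothetical sketch of a feature-based post hoc explanation (Grad-CAM style).
# The target model is already trained and fixed; we only read activations and
# gradients from one convolutional layer to build a class-relevance heatmap.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()  # stand-in for the fixed target CNN
target_layer = model.layer4[-1]                          # assumed choice of explained layer

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

def grad_cam(image, class_idx=None):
    """image: (1, 3, H, W) tensor, already normalised for the model."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Weight each feature map by its average gradient, keep only positive evidence.
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()  # heatmap in [0, 1]
```

The resulting heatmap can be overlaid on the input image to show which regions the fixed model relied on for its prediction, which is the kind of justification the abstract argues clinicians need in order to assess a prediction's clinical usefulness.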
Databáze: |
OpenAIRE |
External link: |
|