Author: |
Raghavan, Kaushik; Sivaselvan, B.; Kamakoti, V. |
Subject: |
|
Source: |
Multimedia Tools & Applications; Jun2024, Vol. 83 Issue 19, p57551-57578, 28p |
Abstract: |
Explainable artificial intelligence (XAI) can help build trust between AI models and healthcare professionals in the context of medical image classification. XAI can explain the reasoning behind predictions, which helps healthcare professionals understand and trust the AI model. This paper presents a novel 'attention-guided Grad-CAM,' a class of explainability algorithms that visually reveals the reasons for a prediction in image classification. To implement the proposed methods, we used infrared breast images from the "Database of Mastology Research." First, we built a classifier for detecting breast cancer using an ensemble of three pre-trained networks. Then we implemented an attention-guided Grad-CAM using channel and spatial attention to visualize the critical regions of the infrared breast image that explain the reasons for the predictions. The proposed ensemble of pre-trained networks was able to classify the breast thermograms (Healthy / Tumour) with an accuracy of 98.04% (Precision: 97.22%, Specificity: 97.77%, Sensitivity: 98.21%, F1-Score: 97.49%, AUC: 0.97). The proposed attention-guided Grad-CAM method was able to distinctly show the hottest regions of the thermograms (tumor regions). The ablation study also showed an average drop of 42.5% in the model's confidence when the explanation maps were used instead of the original image. The activation score also increased by 25.35%. The proposed ensemble of pre-trained networks was able to classify the breast thermograms accurately, and the attention-guided Grad-CAM was able to visually explain the AI model's prediction using a heat map. The proposed model will aid the trusted adoption of AI techniques by healthcare professionals. [ABSTRACT FROM AUTHOR] |
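A minimal NumPy sketch of the idea described in the abstract: standard Grad-CAM weights the last convolutional feature maps by their pooled gradients, and the "attention-guided" variant additionally re-weights the features with channel and spatial attention before the weighted sum. All function names, shapes, and the attention inputs here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Standard Grad-CAM over one conv layer.

    activations, gradients: (C, H, W) arrays for the predicted class.
    Returns a (H, W) saliency map normalized to [0, 1].
    """
    # Channel weights = global-average-pooled gradients (the Grad-CAM weights).
    weights = gradients.mean(axis=(1, 2))              # (C,)
    # Weighted sum of feature maps over channels.
    cam = np.tensordot(weights, activations, axes=1)   # (H, W)
    cam = np.maximum(cam, 0)                           # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam

def attention_guided_grad_cam(activations, gradients, channel_att, spatial_att):
    """Hypothetical attention-guided variant: refine features with
    channel attention (C,) and spatial attention (H, W) first."""
    refined = activations * channel_att[:, None, None] * spatial_att[None, :, :]
    return grad_cam(refined, gradients)

# Toy example with random tensors standing in for a real network's outputs.
rng = np.random.default_rng(0)
acts = rng.random((8, 4, 4))       # 8 feature maps of size 4x4
grads = rng.random((8, 4, 4))      # matching class gradients
ch_att = rng.random(8)             # channel attention scores
sp_att = rng.random((4, 4))        # spatial attention map
heatmap = attention_guided_grad_cam(acts, grads, ch_att, sp_att)
```

The resulting heat map would be upsampled to the thermogram's resolution and overlaid on it to highlight the hottest (tumor) regions.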
Database: |
Complementary Index |
External link: |
|