Explaining a Deep Learning Based Breast Ultrasound Image Classifier with Saliency Maps.

Authors: Byra M; Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland., Dobruch-Sobczak K; Radiology Department II, Maria Sklodowska-Curie National Research Institute of Oncology, Warsaw, Poland., Piotrzkowska-Wroblewska H; Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland., Klimonda Z; Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland., Litniewski J; Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland.
Language: English
Source: Journal of ultrasonography [J Ultrason] 2022 Apr 27; Vol. 22 (89), pp. 70-75. Date of Electronic Publication: 2022 Apr 27 (Print Publication: 2022).
DOI: 10.15557/JoU.2022.0013
Abstract: Aim of the Study: Deep neural networks have achieved good performance in breast mass classification in ultrasound imaging. However, their usage in clinical practice is still limited due to the lack of explainability of the decisions made by the networks. In this study, to address the explainability problem, we generated saliency maps indicating the ultrasound image regions important for the network's classification decisions.
Material and Methods: Ultrasound images were collected from 272 breast masses, including 123 malignant and 149 benign. Transfer learning was applied to develop a deep network for breast mass classification. Next, the class activation mapping technique was used to generate saliency maps for each image. Breast mass images were divided into three regions: the breast mass region, the peritumoral region surrounding the breast mass, and the region below the breast mass. The pointing game metric was used to quantitatively assess the overlap between the saliency maps and the three selected US image regions.
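The two evaluation steps described above can be sketched in code. The snippet below is an illustrative numpy sketch, not the authors' implementation: `class_activation_map` computes a class activation map as the class-weight-weighted sum of the final convolutional feature maps, and `pointing_game_hit` implements the standard pointing game test, checking whether the saliency maximum falls inside a given region mask (e.g. the breast mass, peritumoral, or below-mass region). Function names, shapes, and the `tolerance` parameter are assumptions for illustration.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Illustrative CAM: weight the last conv layer's feature maps
    (shape C x H x W) by the classifier weights for one class (shape C,),
    rectify, and normalize to [0, 1]."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)  # keep only positively contributing regions
    peak = cam.max()
    return cam / peak if peak > 0 else cam

def pointing_game_hit(saliency_map, region_mask, tolerance=0):
    """Pointing game: a 'hit' if the saliency maximum lies inside the
    region mask (optionally within a small tolerance window)."""
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    if tolerance > 0:
        y0, y1 = max(0, y - tolerance), y + tolerance + 1
        x0, x1 = max(0, x - tolerance), x + tolerance + 1
        return bool(region_mask[y0:y1, x0:x1].any())
    return bool(region_mask[y, x])

def pointing_game_accuracy(saliency_maps, region_masks):
    """Fraction of images whose saliency maximum hits the region mask."""
    hits = sum(pointing_game_hit(s, m) for s, m in zip(saliency_maps, region_masks))
    return hits / len(saliency_maps)
```

In this sketch, a saliency map "overlaps" a region when its single strongest activation lies inside that region's mask; aggregating hits over the test set yields the per-region overlap percentage reported in the Results.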
Results: The deep learning classifier achieved an area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity of 0.887, 0.835, 0.801, and 0.868, respectively. For the correctly classified test US images, analysis of the saliency maps revealed that the decisions of the network could be associated with the three selected regions in 71% of cases.
Conclusions: Our study is an important step toward a better understanding of deep learning models developed for breast mass diagnosis. We demonstrated that the decisions made by the network can be related to the appearance of certain tissue regions in breast mass US images.
Competing Interests: Conflict of interest The authors do not report any financial or personal connections with other persons or organizations which might negatively affect the contents of this publication and/or claim authorship rights to this publication.
(© 2022 Michał Byra, Katarzyna Dobruch-Sobczak, Hanna Piotrzkowska-Wroblewska, Ziemowit Klimonda, Jerzy Litniewski, published by Sciendo.)
Database: MEDLINE