Automatic Detection of Post-Operative Clips in Mammography Using a U-Net Convolutional Neural Network.

Authors: Schnitzler T; Institute of Radiology, Cantonal Hospital Aarau, 5001 Aarau, Switzerland., Ruppert C; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland., Hejduk P; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland., Borkowski K; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland., Kajüter J; Institute of Diagnostic and Interventional Radiology, University Hospital Basel, 4031 Basel, Switzerland., Rossi C; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland., Ciritsis A; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland., Landsmann A; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland., Zaytoun H; Institute of Radiology, Cantonal Hospital Aarau, 5001 Aarau, Switzerland., Boss A; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland., Schindera S; Institute of Radiology, Cantonal Hospital Aarau, 5001 Aarau, Switzerland., Burn F; Institute of Radiology, Cantonal Hospital Aarau, 5001 Aarau, Switzerland.
Language: English
Source: Journal of Imaging [J Imaging] 2024 Jun 19; Vol. 10 (6). Date of Electronic Publication: 2024 Jun 19.
DOI: 10.3390/jimaging10060147
Abstract: Background: After breast-conserving surgery (BCS), surgical clips mark the tumor bed and thereby the most probable area for tumor relapse. The aim of this study was to investigate whether a U-Net-based deep convolutional neural network (dCNN) can be used to detect surgical clips in follow-up mammograms after BCS.
Methods: A total of 884 mammograms and 517 tomosynthesis images depicting surgical clips and calcifications were manually segmented and classified. A U-Net-based segmentation network was trained on 922 images and validated on 394 images. An external test dataset of 39 images was annotated by two radiologists with up to 7 years of experience in breast imaging. The network's performance was compared to that of the human readers using accuracy and interrater agreement (Cohen's kappa).
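The abstract does not state which overlap metric was used to score the segmentations; a common choice for U-Net outputs is the Dice coefficient. The sketch below (an illustrative assumption, not the authors' code) computes it for two binary masks given as flat 0/1 lists:

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks given as flat 0/1 lists.

    Returns 2*|P ∩ T| / (|P| + |T|); defined as 1.0 when both masks are empty.
    """
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0


# Toy example: predicted clip mask vs. reference annotation
pred = [1, 1, 0, 0]
truth = [1, 0, 0, 0]
print(dice_coefficient(pred, truth))  # → 0.666... (2*1 / (2+1))
```

In practice the masks would be flattened pixel arrays of the mammogram-sized network output after thresholding.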
Results: The overall classification accuracy on the validation set after 45 epochs ranged between 88.2% and 92.6%, indicating that the model's performance is comparable to the decisions of a human reader. In 17.4% of cases, calcifications were misclassified as post-operative clips. The interrater reliability of the model compared with the radiologists showed substantial agreement (κ reader1 = 0.72, κ reader2 = 0.78), while the two readers compared with each other yielded a Cohen's kappa of 0.84, indicating near-perfect agreement.
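Cohen's kappa, the agreement statistic reported above, corrects observed rater agreement for the agreement expected by chance from each rater's label frequencies. A minimal stdlib-only sketch (not the authors' code) for two raters' label sequences:

```python
from collections import Counter


def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length sequences of categorical labels."""
    assert len(a) == len(b) and a, "need two equal-length, non-empty label lists"
    n = len(a)
    # Observed agreement: fraction of items both raters labeled identically.
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    # Chance agreement: product of the raters' marginal label frequencies.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)


# Toy example: e.g. 1 = "clip present", 0 = "no clip"
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))  # → 0.5
```

Values above roughly 0.6 are conventionally read as substantial agreement and above 0.8 as near-perfect, matching the interpretation given in the results.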
Conclusions: This study shows that surgical clips can be adequately identified by an AI technique. Potential applications of the proposed technique include patient triage and the automatic exclusion of post-operative cases from PGMI (Perfect, Good, Moderate, Inadequate) evaluation, thus improving the quality-management workflow.
Database: MEDLINE