Author:
Shen L; Icahn School of Medicine at Mount Sinai (ISMMS), Department of Neuroscience, New York, 10029, USA. li.shen@mssm.edu., Margolies LR; ISMMS, Department of Diagnostic, Molecular, and Interventional Radiology, New York, 10029, USA., Rothstein JH; ISMMS, Department of Population Health Science and Policy and Department of Genetics and Genomic Sciences, New York, 10029, USA., Fluder E; ISMMS, Department of Scientific Computing, New York, 10029, USA., McBride R; ISMMS, Department of Pathology, New York, 10029, USA., Sieh W; ISMMS, Department of Population Health Science and Policy and Department of Genetics and Genomic Sciences, New York, 10029, USA.
Abstract:
The rapid development of deep learning, a family of machine learning techniques, has spurred much interest in its application to medical imaging problems. Here, we develop a deep learning algorithm that can accurately detect breast cancer on screening mammograms using an "end-to-end" training approach that efficiently leverages training datasets with either complete clinical annotation or only the cancer status (label) of the whole image. In this approach, lesion annotations are required only in the initial training stage, and subsequent stages require only image-level labels, eliminating the reliance on rarely available lesion annotations. Our all convolutional network method for classifying screening mammograms attained excellent performance in comparison with previous methods. On an independent test set of digitized film mammograms from the Digital Database for Screening Mammography (CBIS-DDSM), the best single model achieved a per-image AUC of 0.88, and four-model averaging improved the AUC to 0.91 (sensitivity: 86.1%, specificity: 80.1%). On an independent test set of full-field digital mammography (FFDM) images from the INbreast database, the best single model achieved a per-image AUC of 0.95, and four-model averaging improved the AUC to 0.98 (sensitivity: 86.7%, specificity: 96.1%). We also demonstrate that a whole image classifier trained using our end-to-end approach on the CBIS-DDSM digitized film mammograms can be transferred to INbreast FFDM images using only a subset of the INbreast data for fine-tuning and without further reliance on the availability of lesion annotations. These findings show that automatic deep learning methods can be readily trained to attain high accuracy on heterogeneous mammography platforms, and hold tremendous promise for improving clinical tools to reduce false positive and false negative screening mammography results. Code and model available at: https://github.com/lishen/end2end-all-conv.
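The four-model averaging reported above can be sketched as a simple mean of per-image malignancy probabilities across independently trained classifiers. The function name, scores, and decision threshold below are illustrative assumptions, not taken from the paper's released code:

```python
import numpy as np

def ensemble_average(per_model_probs):
    """Average per-image predicted probabilities across models.

    per_model_probs: sequence of rows, one row per model, each row
    holding that model's probability scores for the same set of images.
    Returns the per-image mean probability.
    """
    scores = np.asarray(per_model_probs, dtype=float)
    return scores.mean(axis=0)

# Four hypothetical models scoring the same three images
# (numbers are made up for illustration only).
probs = [
    [0.90, 0.10, 0.60],
    [0.85, 0.20, 0.55],
    [0.92, 0.05, 0.65],
    [0.88, 0.15, 0.70],
]
avg = ensemble_average(probs)
labels = (avg >= 0.5).astype(int)  # threshold chosen for illustration
print(avg)     # mean probability per image
print(labels)  # 1 = predicted malignant at this threshold
```

Averaging the continuous scores before thresholding, rather than voting on hard labels, is what allows an ensemble to raise the per-image AUC relative to any single model.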