Abstract:
Purpose: This study aimed to develop a deep convolutional neural network (DCNN) model to classify molecular subtypes of breast cancer from ultrasound (US) images combined with clinical information.

Methods: A total of 1,012 breast cancer patients with 2,284 US images (center 1) were collected as the main cohort for training and internal testing. Another cohort of 117 breast cancer cases with 153 US images (center 2) served as the external testing cohort. Patients were stratified by nodule size (threshold 20 mm) and age (threshold 50 years). DCNN models were constructed from the US images and clinical information to predict the molecular subtypes of breast cancer. For comparison of diagnostic performance, a Breast Imaging-Reporting and Data System (BI-RADS) lexicon model was built on the same data from morphological and clinical description parameters. Diagnostic performance was assessed by accuracy, sensitivity, specificity, Youden's index (YI), and area under the receiver operating characteristic curve (AUC).

Results: The DCNN model achieved better diagnostic performance than the BI-RADS lexicon model in differentiating molecular subtypes of breast cancer in both the main cohort and the external testing cohort (all p < 0.001). In the main cohort, when distinguishing luminal A from non-luminal A subtypes, the model obtained an AUC of 0.776 (95% CI, 0.649-0.885) for patients older than 50 years and 0.818 (95% CI, 0.726-0.902) for those with tumor sizes ≤20 mm. For younger patients (≤50 years), the AUC for detecting triple-negative breast cancer was 0.712 (95% CI, 0.538-0.874). In the external testing cohort, when distinguishing luminal A from non-luminal A subtypes in patients older than 50 years, the DCNN model achieved an AUC of 0.686 (95% CI, 0.567-0.806).

Conclusions: We developed a DCNN model to predict the molecular subtypes of breast cancer from US images. The model's utility varies with patient age and nodule size.
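The abstract reports several threshold-based diagnostic metrics. As a minimal sketch of how these relate to a binary confusion matrix, the function below computes accuracy, sensitivity, specificity, and Youden's index (YI = sensitivity + specificity − 1); the counts in the example are hypothetical and are not taken from the study's data.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Compute accuracy, sensitivity, specificity, and Youden's index
    from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fn + fp + tn)  # overall fraction correct
    youden = sensitivity + specificity - 1      # YI = Se + Sp - 1
    return accuracy, sensitivity, specificity, youden


if __name__ == "__main__":
    # Hypothetical counts for a luminal A vs. non-luminal A classification.
    acc, se, sp, yi = diagnostic_metrics(tp=60, fn=20, fp=15, tn=55)
    print(f"accuracy={acc:.3f} sensitivity={se:.3f} "
          f"specificity={sp:.3f} YI={yi:.3f}")
```

Unlike these single-threshold metrics, the reported AUC summarizes sensitivity and specificity across all decision thresholds, which is why the study reports both.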