Fusion deep learning approach combining diffuse optical tomography and ultrasound for improving breast cancer classification.

Author: Zhang M; Electrical and Systems Engineering Department, Washington University in St. Louis, 1 Brookings Dr, St. Louis, MO 63130, USA., Xue M; Biomedical Engineering Department, Washington University in St. Louis, 1 Brookings Dr, St. Louis, MO 63130, USA., Li S; Biomedical Engineering Department, Washington University in St. Louis, 1 Brookings Dr, St. Louis, MO 63130, USA., Zou Y; Biomedical Engineering Department, Washington University in St. Louis, 1 Brookings Dr, St. Louis, MO 63130, USA., Zhu Q; Electrical and Systems Engineering Department, Washington University in St. Louis, 1 Brookings Dr, St. Louis, MO 63130, USA.; Biomedical Engineering Department, Washington University in St. Louis, 1 Brookings Dr, St. Louis, MO 63130, USA.
Language: English
Source: Biomedical optics express [Biomed Opt Express] 2023 Mar 27; Vol. 14 (4), pp. 1636-1646. Date of Electronic Publication: 2023 Mar 27 (Print Publication: 2023).
DOI: 10.1364/BOE.486292
Abstract: Diffuse optical tomography (DOT) is a promising technique that provides functional information related to tumor angiogenesis. However, reconstructing the DOT functional map of a breast lesion is an ill-posed, underdetermined inverse problem. A co-registered ultrasound (US) system that provides structural information about the breast lesion can improve the localization and accuracy of DOT reconstruction. Additionally, the well-known US characteristics of benign and malignant breast lesions can further improve cancer diagnosis beyond what DOT achieves alone. Inspired by a fusion model deep learning approach, we combined US features extracted by a modified VGG-11 network with images reconstructed from a DOT deep learning auto-encoder-based model to form a new neural network for breast cancer diagnosis. The combined neural network model was trained with simulation data and fine-tuned with clinical data: it achieved an AUC of 0.931 (95% CI: 0.919-0.943), superior to those achieved using US images alone (0.860) or DOT images alone (0.842).
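
Implementation sketch: The abstract describes a two-branch fusion architecture (a modified VGG-11 extracting US features, fused with DOT reconstructions from an auto-encoder-based model). The following minimal PyTorch sketch illustrates that general fusion pattern only; all input shapes, layer sizes, slice counts, and the fusion head below are illustrative assumptions, not the paper's actual configuration.

    # Minimal two-branch fusion sketch (PyTorch). Hypothetical shapes:
    # US images as 3x224x224 tensors, DOT reconstructions as 7 stacked
    # 64x64 slices. The real paper's architecture may differ.
    import torch
    import torch.nn as nn
    from torchvision.models import vgg11

    class FusionClassifier(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            # US branch: VGG-11 convolutional backbone (the paper uses a
            # modified VGG-11; here we reuse the stock feature extractor).
            self.us_backbone = vgg11(weights=None).features
            self.us_pool = nn.AdaptiveAvgPool2d(1)  # -> (B, 512, 1, 1)
            # DOT branch: small CNN over reconstructed absorption maps,
            # assumed here to be 7 slices stacked as input channels.
            self.dot_branch = nn.Sequential(
                nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 64)
            )
            # Fusion head: concatenate both feature vectors, then classify.
            self.head = nn.Sequential(
                nn.Linear(512 + 64, 128), nn.ReLU(), nn.Dropout(0.5),
                nn.Linear(128, num_classes),
            )

        def forward(self, us_img, dot_img):
            us_feat = self.us_pool(self.us_backbone(us_img)).flatten(1)
            dot_feat = self.dot_branch(dot_img)
            return self.head(torch.cat([us_feat, dot_feat], dim=1))

    # Smoke test with dummy tensors (batch of 2).
    model = FusionClassifier()
    logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 7, 64, 64))
    print(logits.shape)  # torch.Size([2, 2])

Late feature-level fusion (concatenating per-branch embeddings before a small classifier head) is one common way to combine modalities with different spatial statistics, and mirrors the simulation-pretraining-then-clinical-fine-tuning workflow the abstract reports.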
Competing Interests: The authors declare that there are no conflicts of interest related to this article.
(© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement.)
Database: MEDLINE