Multimodal AI Combining Clinical and Imaging Inputs Improves Prostate Cancer Detection.

Authors: Roest C, Yakar D, Rener Sitar DI, Bosma JS, Rouw DB, Fransen SJ, Huisman H, Kwee TC
Affiliations: From the Department of Radiology, Medical Imaging Center, University Medical Center Groningen, Groningen, the Netherlands (C.R., D.Y., D.I.R.S., S.J.F., T.C.K.); Department of Radiology, Netherlands Cancer Center Antoni van Leeuwenhoek, Amsterdam, the Netherlands (D.Y.); Department of Radiology, Radboud University Medical Center, Nijmegen, the Netherlands (J.S.B., H.H.); and Department of Radiology, Martini Ziekenhuis Groningen, Groningen, the Netherlands (D.B.R.).
Language: English
Source: Investigative Radiology [Invest Radiol] 2024 Dec 01; Vol. 59 (12), pp. 854-860. Date of Electronic Publication: 2024 Jul 29.
DOI: 10.1097/RLI.0000000000001102
Abstract: Objectives: Deep learning (DL) studies for the detection of clinically significant prostate cancer (csPCa) on magnetic resonance imaging (MRI) often overlook potentially relevant clinical parameters such as prostate-specific antigen, prostate volume, and age. This study explored the integration of clinical parameters and MRI-based DL to enhance diagnostic accuracy for csPCa on MRI.
Materials and Methods: We retrospectively analyzed 932 biparametric prostate MRI examinations performed for suspected csPCa (ISUP ≥2) at 2 institutions. Each MRI scan was automatically analyzed by a previously developed DL model to detect and segment csPCa lesions. Three sets of features were extracted: DL lesion suspicion levels, clinical parameters (prostate-specific antigen, prostate volume, age), and MRI-based lesion volumes for all DL-detected lesions. Six multimodal artificial intelligence (AI) classifiers were trained for each combination of feature sets, employing both early (feature-level) and late (decision-level) information fusion methods. The diagnostic performance of each model was tested internally on 20% of center 1 data and externally on center 2 data (n = 529). Receiver operating characteristic comparisons determined the optimal feature combination and information fusion method and assessed the benefit of multimodal versus unimodal analysis. The optimal model performance was compared with a radiologist using PI-RADS.
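The two information-fusion strategies compared in the study can be sketched in a few lines. The following is a minimal illustration, not the study's actual models: the function names, the placeholder weights, and the probability-averaging rule for late fusion are assumptions chosen only to make the feature-level vs. decision-level distinction concrete.

```python
import math

def sigmoid(z: float) -> float:
    """Logistic function mapping a raw score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def early_fusion(features: list[float], weights: list[float], bias: float) -> float:
    """Feature-level (early) fusion: DL lesion suspicion and clinical
    parameters are concatenated into a single feature vector and scored
    by one classifier (a linear model here, purely for illustration)."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return sigmoid(z)

def late_fusion(dl_prob: float, clinical_prob: float) -> float:
    """Decision-level (late) fusion: each modality produces its own
    probability, and the decisions are combined afterward (here by
    simple averaging, one of several possible combination rules)."""
    return 0.5 * (dl_prob + clinical_prob)

# Hypothetical patient: DL suspicion 0.8, PSA 6.2 ng/mL, volume 45 mL, age 67.
features = [0.8, 6.2, 45.0, 67.0]
weights = [3.0, 0.2, -0.05, 0.01]  # placeholder weights, not fitted values
p_early = early_fusion(features, weights, bias=-2.0)
p_late = late_fusion(dl_prob=0.8, clinical_prob=0.6)
```

The practical difference is that early fusion lets one classifier learn interactions between modalities, while late fusion keeps the modality-specific models independent and only merges their outputs.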
Results: Internally, the multimodal AI integrating DL suspicion levels with clinical features via early fusion achieved the highest performance. Externally, it surpassed baselines using clinical parameters alone (AUC: 0.77 vs 0.67, P < 0.001) and DL suspicion levels alone (AUC: 0.77 vs 0.70, P = 0.006). Early fusion outperformed late fusion in external data (AUC: 0.77 vs 0.73, P = 0.005). No significant performance gaps were observed between multimodal AI and radiologist assessments (internal AUC: 0.87 vs 0.88; external AUC: 0.77 vs 0.75; both P > 0.05).
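The area-under-the-curve (AUC) values reported above can be computed nonparametrically as the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the Mann-Whitney formulation). This sketch is illustrative only and is not the study's analysis code:

```python
def auc(pos_scores: list[float], neg_scores: list[float]) -> float:
    """Nonparametric AUC: fraction of (positive, negative) pairs where the
    positive case outranks the negative one; ties count as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy example with hypothetical classifier outputs (not study data):
perfect = auc([0.9, 0.8], [0.1, 0.2])   # complete separation -> 1.0
chance = auc([0.5, 0.5], [0.5, 0.5])    # all ties -> 0.5
```

Formal comparison of two correlated AUCs on the same test set (as in the ROC comparisons reported here) additionally requires a paired test such as DeLong's method.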
Conclusions: Multimodal AI (combining DL suspicion levels and clinical parameters) outperforms clinical and MRI-only AI for csPCa detection. Early information fusion enhanced AI robustness in our multicenter setting. Incorporating lesion volumes did not enhance diagnostic efficacy.
Competing Interests: Conflicts of interest and sources of funding: C.R., T.C.K., D.Y., and H.H. are receiving a grant from Siemens Healthineers. H.H. is receiving a grant from Canon Medical Systems. For the remaining authors, none were declared.
(Copyright © 2024 The Author(s). Published by Wolters Kluwer Health, Inc.)
Database: MEDLINE