Author: Li, Wei; Liu, Tianyu; Feng, Feiyan; Yu, Shengpeng; Wang, Hong; Sun, Yanshen
Source: IEEE Journal of Biomedical and Health Informatics; December 2024, Vol. 28, Issue 12, pp. 7322-7331 (10 pp.)
Abstract: Early detection of tumors through medical imaging significantly improves patients' survival rates. However, prevailing methods struggle to extract comprehensive information from diverse modalities, which widens semantic gaps, overlooks correlations between tasks, and ultimately degrades the accuracy of prognosis prediction. Moreover, clinical insight suggests that sharing parameters between tumor segmentation and survival prediction improves prognostic accuracy. This paper proposes BTSSPro, a novel model that jointly addresses Breast cancer Tumor Segmentation and Survival prediction through a Prompt-guided multi-modal co-learning framework. The approach first extracts tumor-specific discriminative features using shared dual attention (SDA) blocks, which combine spatial and channel information from breast MR images. A guided fusion module (GFM) then integrates the Electronic Health Record (EHR) vector into these tumor-related feature representations, prompting the model's feature selection to align more closely with real-world scenarios. Finally, a feature harmonic unit (FHU) coordinates the transformer encoder and the CNN decoder, reducing the semantic gap between them. BTSSPro achieves a C-index of 0.968 and a Dice score of 0.715 on the Breast MRI-NACT-Pilot dataset, and a C-index of 0.807 and a Dice score of 0.791 on the ISPY1 dataset, surpassing previous state-of-the-art methods.
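The abstract names shared dual attention (SDA) blocks that combine spatial and channel information from MR feature maps. The paper's exact layer design is not given in this record, so the following PyTorch sketch only illustrates a generic dual-attention block under assumed conventions; the class name, reduction ratio, and pooling choices are all assumptions, not the authors' implementation.

```python
# Illustrative dual-attention block: channel gating (squeeze-excite style)
# followed by spatial gating, as one plausible reading of an "SDA" block.
import torch
import torch.nn as nn


class SharedDualAttention(nn.Module):
    """Reweights a feature map along channels, then along spatial locations."""

    def __init__(self, channels: int, reduction: int = 8):  # reduction is an assumption
        super().__init__()
        # Channel branch: global average pool -> bottleneck MLP -> per-channel gates.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: pooled channel statistics -> 7x7 conv -> spatial mask.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)  # emphasize informative channels
        stats = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * self.spatial_gate(stats)  # emphasize informative locations


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)  # toy MR feature map
    print(SharedDualAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```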
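The reported metrics follow standard definitions: Harrell's concordance index (C-index) for survival prediction and the Dice score for segmentation overlap. A minimal NumPy sketch of both definitions, assuming Harrell's usual comparable-pair rule for censored data; this is not the authors' evaluation code.

```python
import numpy as np


def c_index(times, events, risks):
    """Fraction of comparable pairs whose predicted risk ordering matches
    the observed survival ordering (ties in risk count as 0.5)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable if subject i's event precedes time j.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable


def dice(pred, target, eps=1e-7):
    """2|A ∩ B| / (|A| + |B|) over binary segmentation masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)


# Example: earlier events paired with higher risks give a perfect C-index.
# c_index([5, 3, 8], [1, 1, 0], [0.4, 0.9, 0.1]) -> 1.0
```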
Database: Supplemental Index
External link: