Abstract: |
Objective: To develop a deep-learning-based multi-task (DMT) model for joint tumor enlargement prediction (TEP) and automatic tumor segmentation (TS) in vestibular schwannoma (VS) patients using their initial diagnostic contrast-enhanced T1-weighted (ceT1) magnetic resonance images (MRIs). Methods: Initial ceT1 MRIs of VS patients meeting the inclusion/exclusion criteria of this study were retrospectively collected. VSs on the initial MRIs and their first follow-up scans were manually contoured. Tumor volume and enlargement ratio were measured from the expert contours. A DMT model was constructed for joint TS and TEP. The manually segmented VS volume on the initial scan and the tumor enlargement label (≥20% volumetric growth) were used as the ground truth for training and evaluating the TS and TEP modules, respectively. Results: We performed 5-fold cross-validation on the eligible patients (n = 103). The median segmentation Dice coefficient, prediction sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were 84.20%, 0.68, 0.78, 0.72, and 0.77, respectively. The segmentation result is significantly better than that of a separate TS network (Dice coefficient of 83.13%, p = 0.03) and marginally lower than that of the state-of-the-art segmentation model nnU-Net (Dice coefficient of 86.45%, p = 0.16). The TEP performance is significantly better than that of a single-task prediction model (AUC = 0.60, p = 0.01) and marginally better than that of a radiomics-based prediction model (AUC = 0.70, p = 0.17). Conclusion: The proposed DMT model has higher learning efficiency and achieves promising performance on both TEP and TS. The proposed technology has the potential to improve VS patient management. Level of Evidence: NA. Laryngoscope, 133:2754–2760, 2023. |
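The abstract describes a shared network with a segmentation (TS) head and a binary enlargement-prediction (TEP) head, evaluated with the Dice coefficient and a ≥20% volumetric-growth label. Below is a minimal PyTorch sketch of that kind of multi-task design, not the authors' implementation: the encoder depth, filter counts, and all class/function names (MultiTaskVSNet, dice_coefficient, enlargement_label) are illustrative assumptions added here; only the joint TS/TEP structure, the Dice metric, and the 20% growth threshold come from the abstract.

```python
# Hypothetical sketch of a joint tumor-segmentation (TS) /
# tumor-enlargement-prediction (TEP) network; layer sizes and names
# are assumptions, not the paper's architecture.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions with ReLU, shared by encoder and decoder."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class MultiTaskVSNet(nn.Module):
    """Shared encoder feeding a segmentation decoder (TS head) and a
    patient-level classification head (TEP head)."""

    def __init__(self, in_channels=1, base_filters=16):
        super().__init__()
        self.enc1 = conv_block(in_channels, base_filters)
        self.enc2 = conv_block(base_filters, base_filters * 2)
        self.pool = nn.MaxPool3d(2)
        # TS head: upsample back to input resolution, 1-channel tumor mask.
        self.up = nn.ConvTranspose3d(base_filters * 2, base_filters, 2, stride=2)
        self.dec1 = conv_block(base_filters * 2, base_filters)
        self.seg_out = nn.Conv3d(base_filters, 1, kernel_size=1)
        # TEP head: global pooling of the deepest features, then a linear classifier.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(base_filters * 2, 1),
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        seg_logits = self.seg_out(d1)        # voxel-wise tumor logits (TS)
        growth_logit = self.cls_head(e2)     # enlargement logit (TEP)
        return seg_logits, growth_logit


def dice_coefficient(pred_mask, true_mask, eps=1e-6):
    """Dice overlap between binary masks, the TS evaluation metric."""
    inter = (pred_mask * true_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)


def enlargement_label(initial_volume, followup_volume, threshold=0.20):
    """Ground-truth TEP label: 1 if tumor volume grew by >=20%."""
    return int((followup_volume - initial_volume) / initial_volume >= threshold)
```

In a multi-task setup like this, the segmentation and classification losses would be combined (for example, a weighted sum of a Dice/cross-entropy segmentation loss and a binary cross-entropy enlargement loss), which is one plausible reading of how joint training could improve learning efficiency over the single-task baselines reported in the abstract.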