Segmentation of the prostate, its zones, anterior fibromuscular stroma, and urethra on the MRIs and multimodality image fusion using U-Net model.

Author: Rezaeijo SM; Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran., Jafarpoor Nesheli S; Faculty of Engineering, University of Science and Culture, Tehran, Iran., Fatan Serj M; Department of Computer Science and Mathematics of Security, Rovira i Virgili University, Tarragona, Spain., Tahmasebi Birgani MJ; Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran.
Language: English
Source: Quantitative imaging in medicine and surgery [Quant Imaging Med Surg] 2022 Oct; Vol. 12 (10), pp. 4786-4804.
DOI: 10.21037/qims-22-115
Abstract: Background: Because of the large variability of the prostate gland across patient groups, manual segmentation is time-consuming and subject to inter- and intra-reader variation. Hence, we propose a U-Net model to automatically segment the prostate and its zones, including the peripheral zone (PZ), transitional zone (TZ), anterior fibromuscular stroma (AFMS), and urethra, on MRI [T2-weighted (T2W), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC)] images and on multimodality image fusion.
Methods: A total of 91 eligible patients were retrospectively identified; 50 patients were used for training in a 10-fold cross-validation fashion and 41 for external testing. First, images were registered and cropped with a bounding box. In addition to the T2W, DWI, and ADC images separately, fused images were used; three combinations (T2W + DWI, T2W + ADC, and DWI + ADC) were generated using the wavelet transform (a fusion sketch is given after this paragraph). U-Net was applied to segment the prostate and its zones, the AFMS, and the urethra in a 10-fold cross-validation fashion. Finally, the Dice score (DSC), intersection over union (IoU), precision, recall, and Hausdorff distance (HD) were used to evaluate the proposed model.
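The following is a minimal sketch of wavelet-based fusion of two co-registered MRI slices (e.g., T2W + DWI), assuming both are 2-D NumPy arrays of the same shape and already intensity-normalized. The fusion rule (mean of approximation coefficients, maximum-magnitude detail coefficients) and the 'db1' wavelet are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
import pywt  # PyWavelets


def fuse_slices(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db1") -> np.ndarray:
    """Fuse two co-registered, same-shape MRI slices with a single-level 2-D DWT."""
    # Single-level 2-D discrete wavelet transform of each modality.
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)

    # Approximation band: averaging preserves low-frequency anatomy from both modalities.
    cA = (cA_a + cA_b) / 2.0

    # Detail bands: keep the coefficient with the larger magnitude (sharper edges).
    def pick(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)

    details = (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b))

    # Inverse transform reconstructs the fused slice; crop in case of odd input sizes.
    fused = pywt.idwt2((cA, details), wavelet)
    return fused[: img_a.shape[0], : img_a.shape[1]]
```

In a pipeline like the one described above, such a function would be applied slice by slice after registration and bounding-box cropping, with the fused volumes then passed to the U-Net for training and inference.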
Results: Using T2W images alone on the external test images, higher DSC, IoU, precision, and recall were achieved than with the individual DWI and ADC images. DSC of 95%, 94%, 98%, 94%, and 88%, IoU of 88%, 88.5%, 96%, 90%, and 79%, precision of 95.9%, 93.9%, 97.6%, 93.83%, and 87.82%, and recall of 94.2%, 94.2%, 98.3%, 94%, and 87.93% were achieved for the whole prostate, PZ, TZ, urethra, and AFMS, respectively. The results clearly show that the best segmentation was obtained when the model was trained on T2W + DWI images. DSC of 99.06%, 99.05%, 99.04%, 99.09%, and 98.08%, IoU of 97.09%, 97.02%, 98.12%, 98.13%, and 96%, precision of 99.24%, 98.22%, 98.91%, 99.23%, and 98.9%, and recall of 98.3%, 99.8%, 99.02%, 98.93%, and 97.51% were achieved for the whole prostate, PZ, TZ, urethra, and AFMS, respectively. The minimum HD in the testing set across the three combinations was 0.29, obtained with the T2W + ADC combination for the whole-prostate class.
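For reference, the overlap and distance metrics reported above can be computed per binary mask pair as in the sketch below. It assumes `pred` and `gt` are boolean NumPy arrays of the same shape with non-empty foreground; this is an illustrative reimplementation of the standard definitions, not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def overlap_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Dice, IoU, precision, recall, and symmetric Hausdorff distance for one mask pair."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives

    dsc = 2 * tp / (2 * tp + fp + fn)      # Dice similarity coefficient
    iou = tp / (tp + fp + fn)              # intersection over union
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)

    # Symmetric Hausdorff distance between the two foreground point sets (in voxel units).
    p_pts, g_pts = np.argwhere(pred), np.argwhere(gt)
    hd = max(directed_hausdorff(p_pts, g_pts)[0],
             directed_hausdorff(g_pts, p_pts)[0])

    return {"DSC": dsc, "IoU": iou, "precision": precision, "recall": recall, "HD": hd}
```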
Conclusions: Better performance was achieved using T2W + DWI images than using T2W, DWI, or ADC alone, or the T2W + ADC and DWI + ADC combinations.
Competing Interests: Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://qims.amegroups.com/article/view/10.21037/qims-22-115/coif). The authors have no conflicts of interest to declare.
(2022 Quantitative Imaging in Medicine and Surgery. All rights reserved.)
Database: MEDLINE