A 3-D Anatomy-Guided Self-Training Segmentation Framework for Unpaired Cross-Modality Medical Image Segmentation

Authors: Zhuang, Yuzhou; Liu, Hong; Song, Enmin; Xu, Xiangyang; Liao, Yongde; Ye, Guanchao; Hung, Chih-Cheng
Source: IEEE Transactions on Radiation and Plasma Medical Sciences, January 2024, Vol. 8, Issue 1, pp. 33-52 (20 pages)
Abstract: Unsupervised domain adaptation (UDA) methods have achieved promising results in alleviating the domain shift between different imaging modalities. In this article, we propose a robust two-stage 3-D anatomy-guided self-training cross-modality segmentation (ASTCMSeg) framework based on UDA for unpaired cross-modality image segmentation, comprising an anatomy-guided image translation stage and a self-training segmentation stage. In the translation stage, we first leverage the similarity distributions between patches to capture latent anatomical relationships and propose an anatomical relation consistency (ARC) constraint to preserve correct anatomical relationships. Then, we design a frequency domain constraint to enforce the consistency of important frequency components during image translation. Finally, we integrate the ARC and frequency domain constraints with contrastive learning for anatomy-guided image translation. In the segmentation stage, we propose a context-aware anisotropic mesh network for segmenting anisotropic volumes in the target domain. Meanwhile, we design a volumetric adaptive self-training method that dynamically selects appropriate pseudo-label thresholds to learn abundant label information from unlabeled target volumes. Our proposed method is validated on cross-modality brain structure, cardiac substructure, and abdominal multiorgan segmentation tasks. Experimental results show that it achieves state-of-the-art performance on all tasks and significantly outperforms other 2-D-based and 3-D-based UDA methods. (An illustrative sketch of the adaptive pseudo-labeling step appears after this record.)
Database: Supplemental Index
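
The abstract describes a volumetric adaptive self-training step that dynamically selects pseudo-label thresholds for unlabeled target volumes. Below is a minimal sketch of one plausible reading of that idea, assuming a per-class quantile-based confidence threshold; the function name `adaptive_pseudo_labels`, the `percentile` parameter, and the thresholding rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def adaptive_pseudo_labels(logits, percentile=0.6, ignore_index=255):
    """Generate masked pseudo-labels for one unlabeled target-domain volume.

    logits: (C, D, H, W) raw network outputs.
    For each class, a confidence threshold is chosen dynamically from the
    distribution of that class's predicted confidences (here: a quantile),
    so that easy and hard classes are not filtered by a single fixed cutoff.
    Voxels below their class threshold are set to `ignore_index` and thus
    excluded from the self-training loss.

    NOTE: this is an assumption-laden sketch, not the paper's exact method.
    """
    probs = F.softmax(logits, dim=0)          # (C, D, H, W) class probabilities
    conf, hard_label = probs.max(dim=0)       # per-voxel confidence and argmax label

    pseudo = hard_label.clone()
    for c in range(logits.shape[0]):
        cls_mask = hard_label == c
        if cls_mask.any():
            # per-class dynamic threshold: a quantile of that class's confidences
            thr = torch.quantile(conf[cls_mask], percentile)
            pseudo[cls_mask & (conf < thr)] = ignore_index
    return pseudo

# Usage sketch: supervise the 3-D segmentation network on confident voxels only.
# logits_student: (B, C, D, H, W) predictions; pseudo: stacked pseudo-label volumes.
# loss = F.cross_entropy(logits_student, pseudo, ignore_index=255)
```

The per-class quantile here stands in for whatever dynamic threshold-selection rule the paper uses; the key point it illustrates is that low-confidence voxels are masked out of the loss rather than trained on directly.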