One-shot neuroanatomy segmentation through online data augmentation and confidence aware pseudo label.

Author: Zhang L; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China., Ning G; School of Clinical Medicine, Tsinghua University, Beijing, China., Liang H; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China., Han B; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China., Liao H; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; School of Biomedical Engineering, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China. Electronic address: liao@tsinghua.edu.cn.
Language: English
Source: Medical image analysis [Med Image Anal] 2024 Jul; Vol. 95, pp. 103182. Date of Electronic Publication: 2024 Apr 25.
DOI: 10.1016/j.media.2024.103182
Abstract: Recently, deep learning-based brain segmentation methods have achieved great success. However, most approaches focus on supervised segmentation, which requires many high-quality labeled images. In this paper, we pay attention to one-shot segmentation, aiming to learn from one labeled image and a few unlabeled images. We propose an end-to-end unified network that jointly performs the deformation modeling and segmentation tasks. Our network consists of a shared encoder, a deformation modeling head, and a segmentation head. In the training phase, the atlas and unlabeled images are input to the encoder to obtain multi-scale features. The features are then fed to the multi-scale deformation modeling module to estimate the atlas-to-image deformation field. The deformation modeling module performs the estimation at the feature level in a coarse-to-fine manner. Then, we employ the field to generate the augmented image pair through online data augmentation. We do not apply any appearance transformations because the shared encoder can capture appearance variations. Finally, we adopt a supervised segmentation loss for the augmented image. Considering that the unlabeled images still contain rich information, we introduce confidence-aware pseudo labels for them to further boost the segmentation performance. We validate our network on three benchmark datasets. Experimental results demonstrate that our network significantly outperforms other deep single-atlas-based and traditional multi-atlas-based segmentation methods. Notably, the second dataset is collected from multiple centers, and our network still achieves promising segmentation performance on both the seen and unseen test sets, revealing its robustness. The source code will be available at https://github.com/zhangliutong/brainseg.
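Two ingredients of the pipeline described in the abstract lend themselves to a compact illustration: warping an atlas label map with an estimated deformation field (the online data augmentation step), and keeping only high-confidence voxels of a network's softmax output as pseudo labels for unlabeled images. The sketch below is a minimal NumPy rendering of these two ideas, not the authors' released code; the function names `warp_nearest` and `confidence_pseudo_labels` and the threshold `tau` are illustrative assumptions.

```python
import numpy as np

def warp_nearest(vol, field):
    """Warp a 2-D array with a dense displacement field (nearest-neighbour
    sampling, as one would use for a discrete label map).

    vol:   (H, W) array, e.g. the atlas segmentation
    field: (2, H, W) displacement in voxels, added to the identity grid
    """
    h, w = vol.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Round the displaced coordinates and clamp them to the image bounds.
    yq = np.clip(np.rint(ys + field[0]).astype(int), 0, h - 1)
    xq = np.clip(np.rint(xs + field[1]).astype(int), 0, w - 1)
    return vol[yq, xq]

def confidence_pseudo_labels(probs, tau=0.9):
    """Turn softmax probabilities for an unlabeled image into pseudo labels,
    masking out voxels whose top-class probability falls below tau.

    probs: (C, H, W) softmax output
    Returns (labels, mask): argmax labels and a boolean confidence mask.
    """
    conf = probs.max(axis=0)      # per-voxel confidence
    labels = probs.argmax(axis=0)  # per-voxel predicted class
    mask = conf >= tau             # supervise only confident voxels
    return labels, mask
```

In this simplified form, the pseudo-label loss would be computed only where `mask` is true, so unreliable regions of the unlabeled images do not pollute training; the paper's actual confidence weighting may differ from this hard threshold.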
Competing Interests: Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
(Copyright © 2024. Published by Elsevier B.V.)
Database: MEDLINE