Swin-UMamba: Mamba-based UNet with ImageNet-based pretraining

Author: Liu, Jiarun, Yang, Hao, Zhou, Hong-Yu, Xi, Yan, Yu, Lequan, Yu, Yizhou, Liang, Yong, Shi, Guangming, Zhang, Shaoting, Zheng, Hairong, Wang, Shanshan
Publication year: 2024
Subject:
Document type: Working Paper
Description: Accurate medical image segmentation demands the integration of multi-scale information, spanning from local features to global dependencies. However, it is challenging for existing methods to model long-range global information: convolutional neural networks (CNNs) are constrained by their local receptive fields, and vision transformers (ViTs) suffer from the quadratic complexity of their attention mechanism. Recently, Mamba-based models have gained great attention for their impressive ability in long sequence modeling. Several studies have demonstrated that these models can outperform popular vision models in various tasks, offering higher accuracy, lower memory consumption, and less computational burden. However, existing Mamba-based models are mostly trained from scratch and do not explore the power of pretraining, which has been proven to be quite effective for data-efficient medical image analysis. This paper introduces a novel Mamba-based model, Swin-UMamba, designed specifically for medical image segmentation tasks, leveraging the advantages of ImageNet-based pretraining. Our experimental results reveal the vital role of ImageNet-based pretraining in enhancing the performance of Mamba-based models. Swin-UMamba demonstrates superior performance by a large margin compared to CNNs, ViTs, and the latest Mamba-based models. Notably, on the AbdomenMRI, Endoscopy, and Microscopy datasets, Swin-UMamba outperforms its closest counterpart U-Mamba_Enc by an average score of 2.72%.
Comment: Code and models of Swin-UMamba are publicly available at: https://github.com/JiarunLiu/Swin-UMamba
Database: arXiv