Showing 1 - 10 of 69 for search: '"Cheng, Junlong"'
Author:
Cheng, Junlong, Fu, Bin, Ye, Jin, Wang, Guoan, Li, Tianbin, Wang, Haoyu, Li, Ruoyu, Yao, He, Chen, Junren, Li, JingWen, Su, Yanzhou, Zhu, Min, He, Junjun
Interactive Medical Image Segmentation (IMIS) has long been constrained by the limited availability of large-scale, diverse, and densely annotated datasets, which hinders model generalization and consistent evaluation across different models. In this…
External link:
http://arxiv.org/abs/2411.12814
Author:
Wang, Guoan, Ye, Jin, Cheng, Junlong, Li, Tianbin, Chen, Zhaolin, Cai, Jianfei, He, Junjun, Zhuang, Bohan
Published in:
MICCAI 2024
Volumetric medical image segmentation is pivotal in enhancing disease diagnosis, treatment planning, and advancing medical research. While existing volumetric foundation models for medical image segmentation, such as SAM-Med3D and SegVol, have shown…
External link:
http://arxiv.org/abs/2407.04938
Author:
Ye, Jin, Cheng, Junlong, Chen, Jianpin, Deng, Zhongying, Li, Tianbin, Wang, Haoyu, Su, Yanzhou, Huang, Ziyan, Chen, Jilong, Jiang, Lei, Sun, Hui, Zhu, Min, Zhang, Shaoting, He, Junjun, Qiao, Yu
Segment Anything Model (SAM) has achieved impressive results for natural image segmentation with input prompts such as points and bounding boxes. Its success owes largely to massive labeled training data. However, directly applying SAM to medical ima…
External link:
http://arxiv.org/abs/2311.11969
Author:
Wang, Haoyu, Guo, Sizheng, Ye, Jin, Deng, Zhongying, Cheng, Junlong, Li, Tianbin, Chen, Jianpin, Su, Yanzhou, Huang, Ziyan, Shen, Yiqing, Fu, Bin, Zhang, Shaoting, He, Junjun, Qiao, Yu
Existing volumetric medical image segmentation models are typically task-specific, excelling at specific targets but struggling to generalize across anatomical structures or modalities. This limitation restricts their broader clinical use. In this pap…
External link:
http://arxiv.org/abs/2310.15161
Author:
Huang, Ziyan, Deng, Zhongying, Ye, Jin, Wang, Haoyu, Su, Yanzhou, Li, Tianbin, Sun, Hui, Cheng, Junlong, Chen, Jianpin, He, Junjun, Gu, Yun, Zhang, Shaoting, Gu, Lixu, Qiao, Yu
Although deep learning has revolutionized abdominal multi-organ segmentation, models often struggle with generalization due to training on small, specific datasets. With the recent emergence of large-scale datasets, some important questions arise:…
External link:
http://arxiv.org/abs/2309.03906
Author:
Cheng, Junlong, Ye, Jin, Deng, Zhongying, Chen, Jianpin, Li, Tianbin, Wang, Haoyu, Su, Yanzhou, Huang, Ziyan, Chen, Jilong, Jiang, Lei, Sun, Hui, He, Junjun, Zhang, Shaoting, Zhu, Min, Qiao, Yu
The Segment Anything Model (SAM) represents a state-of-the-art research advancement in natural image segmentation, achieving impressive results with input prompts such as points and bounding boxes. However, our evaluation and recent research indicate…
External link:
http://arxiv.org/abs/2308.16184
Recently, U-shaped networks have dominated the field of medical image segmentation due to their simple and easily tuned structure. However, existing U-shaped segmentation networks: 1) mostly focus on designing complex self-attention modules to compen…
External link:
http://arxiv.org/abs/2307.02953
In recent years, segmentation methods based on deep convolutional neural networks (CNNs) have achieved state-of-the-art results on many medical analysis tasks. However, most of these approaches improve performance by optimizing the structure or add…
External link:
http://arxiv.org/abs/2110.14484
Author:
Ming, Zhangqiang, Zhu, Min, Wang, Xiangkun, Zhu, Jiamin, Cheng, Junlong, Gao, Chengrui, Yang, Yong, Wei, Xiaoyong
In recent years, with the increasing demand for public safety and the rapid development of intelligent surveillance networks, person re-identification (Re-ID) has become one of the most active research topics in the computer vision field. The main research g…
External link:
http://arxiv.org/abs/2110.04764