Showing 1 - 10 of 15,577 for search: '"XIAO, Liang"'
Author:
Jin, Ao-Qun, Xiang, Tian-Yu, Zhou, Xiao-Hu, Gui, Mei-Jiang, Xie, Xiao-Liang, Liu, Shi-Qi, Wang, Shuang-Yi, Cao, Yue, Duan, Sheng-Bin, Xie, Fu-Chao, Hou, Zeng-Guang
Current robot learning algorithms for acquiring novel skills often rely on demonstration datasets or environment interactions, resulting in high labor costs and potential safety risks. To address these challenges, this study proposes a skill-learning…
External link:
http://arxiv.org/abs/2412.09286
Existing conditional Denoising Diffusion Probabilistic Models (DDPMs) with a Noise-Conditional Framework (NCF) remain challenging for 3D scene understanding tasks, as the complex geometric details in scenes increase the difficulty of fitting the grad…
External link:
http://arxiv.org/abs/2411.16308
Author:
Xiao, Xu, Ding, Jiacheng, Luo, Xiao Lin, Lan, Sun Ke, Xiao, Liang, Liu, Shuai, Wang, Xin, Zhang, Le, Li, Xiao-Dong
In the study of cosmology and galaxy evolution, the peculiar velocity and density field of dark matter (DM) play a crucial role in many issues. Here, we propose a UNet-based deep learning model to reconstruct the real-space DM velocity field from…
External link:
http://arxiv.org/abs/2411.11280
Author:
Huang, De-Xing, Zhou, Xiao-Hu, Gui, Mei-Jiang, Xie, Xiao-Liang, Liu, Shi-Qi, Wang, Shuang-Yi, Li, Hao, Xiang, Tian-Yu, Hou, Zeng-Guang
Iodinated contrast agents are widely utilized in numerous interventional procedures, yet they pose substantial health risks to patients. This paper presents CAS-GAN, a novel GAN framework that serves as a "virtual contrast agent" to synthesize X-ray ang…
External link:
http://arxiv.org/abs/2410.08490
Author:
Min, Chen, Si, Shubin, Wang, Xu, Xue, Hanzhang, Jiang, Weizhong, Liu, Yang, Wang, Juan, Zhu, Qingtian, Zhu, Qi, Luo, Lun, Kong, Fanjie, Miao, Jinyu, Cai, Xudong, An, Shuai, Li, Wei, Mei, Jilin, Sun, Tong, Zhai, Heng, Liu, Qifeng, Zhao, Fangzhou, Chen, Liang, Wang, Shuai, Shang, Erke, Shang, Linzhi, Zhao, Kunlong, Li, Fuyang, Fu, Hao, Jin, Lei, Zhao, Jian, Mao, Fangyuan, Xiao, Zhipeng, Li, Chengyang, Dai, Bin, Zhao, Dawei, Xiao, Liang, Nie, Yiming, Hu, Yu, Li, Xuelong
Research on autonomous driving in unstructured outdoor environments is less advanced than in structured urban settings due to challenges such as environmental diversity and scene complexity. These environments, such as rural areas and rugged terrains…
External link:
http://arxiv.org/abs/2410.07701
A global threshold (e.g., 0.5) is often applied to determine which bounding boxes should be included in the final results for an object detection task. A higher threshold reduces false positives but may result in missing a significant portion of true…
External link:
http://arxiv.org/abs/2409.16678
Prompt learning represents a promising method for adapting pre-trained vision-language models (VLMs) to various downstream tasks by learning a set of text embeddings. One challenge inherent to these methods is the poor generalization performance due…
External link:
http://arxiv.org/abs/2407.19674
Author:
Huang, De-Xing, Zhou, Xiao-Hu, Xie, Xiao-Liang, Liu, Shi-Qi, Wang, Shuang-Yi, Feng, Zhen-Qiu, Gui, Mei-Jiang, Li, Hao, Xiang, Tian-Yu, Yao, Bo-Xian, Hou, Zeng-Guang
Automatic vessel segmentation is paramount for developing next-generation interventional navigation systems. However, current approaches suffer from suboptimal segmentation performance due to significant challenges in intraoperative images…
External link:
http://arxiv.org/abs/2406.19749
In this article, we study the Iwasawa theory for cuspidal automorphic representations of $\mathrm{GL}(n)\times\mathrm{GL}(n+1)$ over CM fields along anticyclotomic directions, in the framework of the Gan-Gross-Prasad conjecture for unitary groups. We…
External link:
http://arxiv.org/abs/2406.00624
Author:
Min, Chen, Zhao, Dawei, Xiao, Liang, Zhao, Jian, Xu, Xinli, Zhu, Zheng, Jin, Lei, Li, Jianshu, Guo, Yulan, Xing, Junliang, Jing, Liping, Nie, Yiming, Dai, Bin
Vision-centric autonomous driving has recently attracted wide attention due to its lower cost. Pre-training is essential for extracting a universal representation. However, current vision-centric pre-training typically relies on either 2D or 3D pre-text…
External link:
http://arxiv.org/abs/2405.04390