Showing 1 - 10 of 7,507
for search: '"WU, Ji"'
In the realm of large-scale point cloud registration, designing a compact symbolic representation is crucial for efficiently processing vast amounts of data, ensuring registration robustness against significant viewpoint variations and occlusions. …
External link:
http://arxiv.org/abs/2412.02998
Author:
Li, Di, Yuan, Mao, Wu, Lin, Yan, Jingye, Lv, Xuning, Tsai, Chao-Wei, Wang, Pei, Zhu, WeiWei, Deng, Li, Lan, Ailan, Xu, Renxin, Chen, Xianglei, Meng, Lingqi, Li, Jian, Li, Xiangdong, Zhou, Ping, Yang, Haoran, Xue, Mengyao, Lu, Jiguang, Miao, Chenchen, Wang, Weiyang, Niu, Jiarui, Fang, Ziyao, Fu, Qiuyang, Feng, Yi, Zhang, Peijin, Jiang, Jinchen, Miao, Xueli, Chen, Yu, Sun, Lingchen, Yang, Yang, Deng, Xiang, Dai, Shi, Chen, Xue, Yao, Jumei, Liu, Yujie, Li, Changheng, Zhang, Minglu, Yang, Yiwen, Zhou, Yucheng, Yiyizhou, Zhang, Yongkun, Niu, Chenhui, Zhao, Rushuang, Zhang, Lei, Peng, Bo, Wu, Ji, Wang, Chi
Long-period radio transients (LPTs) are a newly discovered class of radio emitters with yet incomprehensibly long rotation periods, ranging from minutes to hours. The astrophysical nature of their isolated counterparts remains undetermined. We report …
External link:
http://arxiv.org/abs/2411.15739
The analysis of 3D medical images is crucial for modern healthcare, yet traditional task-specific models are becoming increasingly inadequate due to limited generalizability across diverse clinical scenarios. Multimodal large language models (MLLMs) …
External link:
http://arxiv.org/abs/2411.12783
Causal language models acquire a vast amount of knowledge from general text corpora during pretraining, but the efficiency of knowledge learning is known to be unsatisfactory, especially when learning from knowledge-dense and small-sized corpora. …
External link:
http://arxiv.org/abs/2409.17954
Multi-modal large language models (MLLMs) have shown impressive capabilities as a general-purpose interface for various visual and linguistic tasks. However, building a unified MLLM for multi-task learning in the medical field remains a thorny challenge …
External link:
http://arxiv.org/abs/2409.17508
Mastering medical knowledge is crucial for medical-specific LLMs. However, despite the existence of medical benchmarks like MedQA, a unified framework that fully leverages existing knowledge bases to evaluate LLMs' mastery of medical knowledge is still …
External link:
http://arxiv.org/abs/2409.14302
Pretrained language models can encode a large amount of knowledge and utilize it for various reasoning tasks, yet they can still struggle to learn novel factual knowledge effectively from finetuning on limited textual demonstrations. In this work, we …
External link:
http://arxiv.org/abs/2409.14057
Author:
Liu, Wentao, Pan, Qianjun, Zhang, Yi, Liu, Zhuo, Wu, Ji, Zhou, Jie, Zhou, Aimin, Chen, Qin, Jiang, Bo, He, Liang
Large language models (LLMs) have obtained promising results in mathematical reasoning, which is a foundational skill for human intelligence. Most previous studies focus on improving and measuring the performance of LLMs based on textual math reasoning …
External link:
http://arxiv.org/abs/2409.02834
Author:
Wu, Ji-Jia
This paper introduces the Efficient Facial Landmark Detection (EFLD) model, specifically designed for edge devices confronted with the challenges related to power consumption and time latency. EFLD features a lightweight backbone and a flexible detection …
External link:
http://arxiv.org/abs/2407.10228
The early detection of suicide risk is important since it enables intervention to prevent potential suicide attempts. This paper studies the automatic detection of suicide risk based on spontaneous speech from adolescents, and collects a Mandarin …
External link:
http://arxiv.org/abs/2406.03882