Showing 1 - 10 of 14 877 for search: '"HUANG, Lei"'
Author:
Gu, Yuxuan, Wang, Wenjie, Feng, Xiaocheng, Zhong, Weihong, Zhu, Kun, Huang, Lei, Chua, Tat-Seng, Qin, Bing
Large language models (LLMs) have demonstrated impressive instruction-following capabilities, while still struggling to accurately manage the length of the generated text, which is a fundamental requirement in many real-world applications. Existing l…
External link:
http://arxiv.org/abs/2412.14656
Author:
Ye, Yangfan, Feng, Xiaocheng, Feng, Xiachong, Qin, Libo, Huang, Yichong, Huang, Lei, Ma, Weitao, Zhang, Zhirui, Lu, Yunfei, Yan, Xiaohui, Tang, Duyu, Tu, Dandan, Qin, Bing
Current large language models (LLMs) often exhibit imbalances in multilingual capabilities and cultural adaptability, largely due to their English-centric pretraining data. To address this imbalance, we propose a probing method named XTransplant that…
External link:
http://arxiv.org/abs/2412.12686
Author:
Wang, Yuanshuai, Zhang, Xingjian, Zhao, Jinkun, Wen, Siwei, Feng, Peilin, Liao, Shuhao, Huang, Lei, Wu, Wenjun
Large Language Models (LLMs) are key technologies driving intelligent systems to handle multiple tasks. To meet the demands of various tasks, an increasing number of LLM-driven experts with diverse capabilities have been developed, accompanied by co…
External link:
http://arxiv.org/abs/2412.04167
This paper introduces the task of Remote Sensing Copy-Move Question Answering (RSCMQA). Unlike traditional Remote Sensing Visual Question Answering (RSVQA), RSCMQA focuses on interpreting complex tampering scenarios and inferring relationships between…
External link:
http://arxiv.org/abs/2412.02575
In the regime of Rydberg electromagnetically induced transparency, we study the correlated behaviors between the transmission spectra of a pair of probe fields passing through respective parallel one-dimensional cold Rydberg ensembles. Due to the van…
External link:
http://arxiv.org/abs/2411.07726
Author:
Zhao, Cheng, Huang, Song, He, Mengfan, Montero-Camacho, Paulo, Liu, Yu, Renard, Pablo, Tang, Yunyi, Verdier, Aurelien, Xu, Wenshuo, Yang, Xiaorui, Yu, Jiaxi, Zhang, Yao, Zhao, Siyi, Zhou, Xingchen, He, Shengyu, Kneib, Jean-Paul, Li, Jiayi, Li, Zhuoyang, Wang, Wen-Ting, Xianyu, Zhong-Zhi, Zhang, Yidian, Gsponer, Rafaela, Li, Xiao-Dong, Rocher, Antoine, Zou, Siwei, Tan, Ting, Huang, Zhiqi, Wang, Zhuoxiao, Li, Pei, Rombach, Maxime, Dong, Chenxing, Forero-Sanchez, Daniel, Shan, Huanyuan, Wang, Tao, Li, Yin, Zhai, Zhongxu, Wang, Yuting, Zhao, Gong-Bo, Shi, Yong, Mao, Shude, Huang, Lei, Guo, Liquan, Cai, Zheng
The MUltiplexed Survey Telescope (MUST) is a 6.5-meter telescope under development. Dedicated to highly multiplexed, wide-field spectroscopic surveys, MUST observes over 20,000 targets simultaneously using 6.2-mm pitch positioning robots within a ~5…
External link:
http://arxiv.org/abs/2411.07970
Real-world processes involve multiple object types with intricate interrelationships. Traditional event logs (in XES format), which record process execution centred around the case notion, are restricted to a single-object perspective, making it diff…
External link:
http://arxiv.org/abs/2411.07490
Author:
Gu, Yuxuan, Feng, Xiaocheng, Huang, Lei, Wu, Yingsheng, Zhou, Zekun, Zhong, Weihong, Zhu, Kun, Qin, Bing
We present a novel framework for efficiently and effectively extending powerful continuous diffusion processes to discrete modeling. Previous approaches have suffered from the discrepancy between discrete data and continuous modeling. Our study…
External link:
http://arxiv.org/abs/2410.22380
Author:
Ying, Zonghao, Liu, Aishan, Liang, Siyuan, Huang, Lei, Guo, Jinyang, Zhou, Wenbo, Liu, Xianglong, Tao, Dacheng
Multimodal Large Language Models (MLLMs) raise strong safety concerns (e.g., generating harmful outputs for users), which motivates the development of safety evaluation benchmarks. However, we observe that existing safety benchmarks for MLLMs s…
External link:
http://arxiv.org/abs/2410.18927
Author:
Huang, Lei, Feng, Xiaocheng, Ma, Weitao, Zhao, Liang, Fan, Yuchun, Zhong, Weihong, Xu, Dongliang, Yang, Qing, Liu, Hongtao, Qin, Bing
Teaching large language models (LLMs) to generate text with citations to evidence sources can mitigate hallucinations and enhance verifiability in information-seeking systems. However, improving this capability requires high-quality attribution data,…
External link:
http://arxiv.org/abs/2410.13298