Showing 1 - 10 of 54,969
for search: '"ZHANG, Qi"'
Author:
Muhtar, Dilxat; Shen, Yelong; Yang, Yaming; Liu, Xiaodong; Lu, Yadong; Liu, Jianfeng; Zhan, Yuefeng; Sun, Hao; Deng, Weiwei; Sun, Feng; Zhang, Xueliang; Gao, Jianfeng; Chen, Weizhu; Zhang, Qi
In-context learning (ICL) allows large language models (LLMs) to adapt to new tasks directly from the given demonstrations without requiring gradient updates. While recent advances have expanded context windows to accommodate more demonstrations, …
External link:
http://arxiv.org/abs/2411.09289
Author:
Chen, Yanting; Ren, Yi; Qin, Xiaoting; Zhang, Jue; Yuan, Kehong; Han, Lu; Lin, Qingwei; Zhang, Dongmei; Rajmohan, Saravan; Zhang, Qi
Video recordings of user activities, particularly desktop recordings, offer a rich source of data for understanding user behaviors and automating processes. However, despite advancements in Vision-Language Models (VLMs) and their increasing use in …
External link:
http://arxiv.org/abs/2411.08768
Recovering the intrinsic physical attributes of a scene from images, generally termed the inverse rendering problem, has been a central and challenging task in computer vision and computer graphics. In this paper, we present GUS-IR, a novel …
External link:
http://arxiv.org/abs/2411.07478
The heavy meson light-cone distribution amplitude (LCDA), as defined in full QCD, plays a key role in the collinear factorization for exclusive heavy meson production and in lattice computations of the LCDA within heavy-quark effective theory (HQET).
External link:
http://arxiv.org/abs/2411.07101
Author:
Shu, Feng; Jiang, Jinbing; Wang, Xuehui; Yang, Ke; Shen, Chong; Zhang, Qi; Wang, Dongming; Wang, Jiangzhou
Due to its ability to significantly improve data rates, the intelligent reflecting surface (IRS) will be a potentially crucial technique for future-generation wireless networks such as 6G. In this paper, we focus on the analysis of the degrees of freedom …
External link:
http://arxiv.org/abs/2411.07001
Author:
Shao, Sen; Chiu, Wei-Chi; Hossain, Md Shafayat; Hou, Tao; Wang, Naizhou; Belopolski, Ilya; Zhao, Yilin; Ni, Jinyang; Zhang, Qi; Li, Yongkai; Liu, Jinjin; Yahyavi, Mohammad; Jin, Yuanjun; Feng, Qiange; Cui, Peiyuan; Zhang, Cheng-Long; Yao, Yugui; Wang, Zhiwei; Yin, Jia-Xin; Xu, Su-Yang; Ma, Qiong; Gao, Wei-bo; Bansil, Arun; Hasan, M. Zahid; Chang, Guoqing
Implementing and tuning chirality is fundamental in physics, chemistry, and materials science. Chiral charge density waves (CDWs), where chirality arises from correlated charge orders, are attracting intense interest due to their exotic transport and …
External link:
http://arxiv.org/abs/2411.03664
Author:
Su, Aofeng; Wang, Aowen; Ye, Chao; Zhou, Chen; Zhang, Ga; Chen, Gang; Zhu, Guangcheng; Wang, Haobo; Xu, Haokai; Chen, Hao; Li, Haoze; Lan, Haoxuan; Tian, Jiaming; Yuan, Jing; Zhao, Junbo; Zhou, Junlin; Shou, Kaizhe; Zha, Liangyu; Long, Lin; Li, Liyao; Wu, Pengzuo; Zhang, Qi; Huang, Qingyi; Yang, Saisai; Zhang, Tao; Ye, Wentao; Zhu, Wufang; Hu, Xiaomeng; Gu, Xijun; Sun, Xinjie; Li, Xiang; Yang, Yuhang; Xiao, Zhiqing
The emergence of models like GPTs, Claude, LLaMA, and Qwen has reshaped AI applications, presenting vast new opportunities across industries. Yet, the integration of tabular data remains notably underdeveloped, despite its foundational role in …
External link:
http://arxiv.org/abs/2411.02059
While numerous forecasters have been proposed using different network architectures, Transformer-based models achieve state-of-the-art performance in time series forecasting. However, forecasters based on Transformers still suffer from …
External link:
http://arxiv.org/abs/2411.01623
Author:
Zhang, Yudi; Xiao, Pei; Wang, Lu; Zhang, Chaoyun; Fang, Meng; Du, Yali; Puzyrev, Yevgeniy; Yao, Randolph; Qin, Si; Lin, Qingwei; Pechenizkiy, Mykola; Zhang, Dongmei; Rajmohan, Saravan; Zhang, Qi
In-context learning (ICL) and Retrieval-Augmented Generation (RAG) have gained attention for their ability to enhance LLMs' reasoning by incorporating external knowledge, but they suffer from limited context window size, leading to insufficient …
External link:
http://arxiv.org/abs/2411.03349
Author:
Ding, Yiwen; Xi, Zhiheng; He, Wei; Li, Zhuoyuan; Zhai, Yitao; Shi, Xiaowei; Cai, Xunliang; Gui, Tao; Zhang, Qi; Huang, Xuanjing
Self-improvement methods enable large language models (LLMs) to generate solutions themselves and iteratively train on filtered, high-quality rationales. This process proves effective and reduces the reliance on human supervision in LLMs' reasoning, …
External link:
http://arxiv.org/abs/2411.00750