Showing 1 - 10 of 21 381 results for the search: '"Li, HE"'
Author:
Huang, Wenke, Liang, Jian, Shi, Zekun, Zhu, Didi, Wan, Guancheng, Li, He, Du, Bo, Tao, Dacheng, Ye, Mang
Multimodal Large Language Models (MLLMs) have demonstrated strong generalization capabilities across diverse distributions and tasks, largely due to extensive pre-training datasets. Fine-tuning MLLMs has become a common practice to improve performance o
External link: http://arxiv.org/abs/2411.10928
Autonomous cooperative planning (ACP) is a promising technique to improve the efficiency and safety of multi-vehicle interactions for future intelligent transportation systems. However, realizing robust ACP is a challenge due to the aggregation of pe
External link: http://arxiv.org/abs/2411.00413
Model compression methods are used to reduce the computation and energy requirements for Large Language Models (LLMs). Quantization Aware Training (QAT), an effective model compression method, is proposed to reduce performance degradation after quant
External link: http://arxiv.org/abs/2410.10849
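The entry above names Quantization Aware Training (QAT). As a rough, hedged sketch of the general technique (not the cited paper's method; layer sizes and bit width are illustrative), the snippet below fake-quantizes a linear layer's weights during training and uses a straight-through estimator so gradients still flow through the rounding step:

    import torch
    import torch.nn as nn

    def fake_quantize(x, num_bits=8):
        # Symmetric uniform quantization: snap to the integer grid, then dequantize.
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.detach().abs().max().clamp(min=1e-8) / qmax
        x_q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
        # Straight-through estimator: forward sees quantized values,
        # backward treats the quantizer as the identity.
        return x + (x_q - x).detach()

    class QATLinear(nn.Module):
        def __init__(self, in_features, out_features, num_bits=8):
            super().__init__()
            self.linear = nn.Linear(in_features, out_features)
            self.num_bits = num_bits

        def forward(self, x):
            w_q = fake_quantize(self.linear.weight, self.num_bits)
            return nn.functional.linear(x, w_q, self.linear.bias)

    # Training against fake-quantized weights lets the model adapt to the
    # quantization error it will face after real low-bit deployment.
    layer = QATLinear(16, 4)
    layer(torch.randn(2, 16)).sum().backward()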
Author:
Zhong, Tianyang, Liu, Zhengliang, Pan, Yi, Zhang, Yutong, Zhou, Yifan, Liang, Shizhe, Wu, Zihao, Lyu, Yanjun, Shu, Peng, Yu, Xiaowei, Cao, Chao, Jiang, Hanqi, Chen, Hanxu, Li, Yiwei, Chen, Junhao, Hu, Huawen, Liu, Yihen, Zhao, Huaqin, Xu, Shaochen, Dai, Haixing, Zhao, Lin, Zhang, Ruidong, Zhao, Wei, Yang, Zhenyuan, Chen, Jingyuan, Wang, Peilong, Ruan, Wei, Wang, Hui, Zhao, Huan, Zhang, Jing, Ren, Yiming, Qin, Shihuan, Chen, Tong, Li, Jiaxi, Zidan, Arif Hassan, Jahin, Afrar, Chen, Minheng, Xia, Sichen, Holmes, Jason, Zhuang, Yan, Wang, Jiaqi, Xu, Bochen, Xia, Weiran, Yu, Jichao, Tang, Kaibo, Yang, Yaxuan, Sun, Bolun, Yang, Tao, Lu, Guoyu, Wang, Xianqiao, Chai, Lilong, Li, He, Lu, Jin, Sun, Lichao, Zhang, Xin, Ge, Bao, Hu, Xintao, Zhang, Lian, Zhou, Hua, Zhang, Lu, Zhang, Shu, Liu, Ninghao, Jiang, Bei, Kong, Linglong, Xiang, Zhen, Ren, Yudan, Liu, Jun, Jiang, Xi, Bao, Yu, Zhang, Wei, Li, Xiang, Li, Gang, Liu, Wei, Shen, Dinggang, Sikora, Andrea, Zhai, Xiaoming, Zhu, Dajiang, Liu, Tianming
This comprehensive study evaluates the performance of OpenAI's o1-preview large language model across a diverse array of complex reasoning tasks, spanning multiple domains, including computer science, mathematics, natural sciences, medicine, linguist
External link: http://arxiv.org/abs/2409.18486
Diffusion Models (DMs) achieve state-of-the-art synthesis results in image generation and have been applied to various fields. However, DMs sometimes seriously violate user privacy during usage, making the protection of privacy an urgent issue. Using
External link: http://arxiv.org/abs/2409.05414
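As generic background for the Diffusion Models (DMs) mentioned above (this is the standard DDPM forward-noising relation, not the privacy-protection technique of the cited paper; the schedule values are illustrative):

    import torch

    # Linear beta schedule and the closed-form forward process q(x_t | x_0).
    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

    def noise_sample(x0, t):
        # x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
        eps = torch.randn_like(x0)
        a_bar = alphas_cumprod[t]
        xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
        return xt, eps  # a denoiser is trained to predict eps from (xt, t)

    x0 = torch.randn(1, 3, 32, 32)  # stand-in for a training image
    xt, eps = noise_sample(x0, t=500)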
Author:
Zou, Nianlong, Li, He, Ye, Meng, Chen, Haowei, Sun, Minghui, Guo, Ruiping, Liu, Yizhou, Gu, Bing-Lin, Duan, Wenhui, Xu, Yong, Wang, Chong
Nonlinear optical (NLO) effects in materials with band crossings have attracted significant research interest due to the divergent band geometric quantities around these crossings. Most current research has focused on band crossings between the vale
External link: http://arxiv.org/abs/2409.01682
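As generic background for the band-crossing physics in the entry above (a textbook two-band example, not a result of the cited paper; conventions for signs and factors vary), the Berry curvature of the lower band of a gapped 2D Dirac model diverges at the crossing point as the gap closes:

    \[
      H(\mathbf{k}) = v\,(k_x \sigma_x + k_y \sigma_y) + \Delta\,\sigma_z,
      \qquad
      \Omega_{-}(\mathbf{k}) = -\frac{\Delta\, v^{2}}
        {2\,\bigl(v^{2}k^{2} + \Delta^{2}\bigr)^{3/2}},
    \]

so that |\Omega_{-}(0)| = v^{2}/(2\Delta^{2}) grows without bound as \Delta \to 0; band geometric quantities of this kind underlie the strongly enhanced nonlinear optical responses near crossings.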
Author:
Zhao, Zihui, Yao, Yisong, Li, He, Zhao, Yongfeng, Wang, Yujia, Zhang, Hepeng, Chaté, Hugues, Sano, Masaki
Cell layers are often categorized as contractile or extensile active nematics but recent experiments on neural progenitor cells with induced $+1$ topological defects challenge this classification. In a bottom-up approach, we first study a relevant pa
External link: http://arxiv.org/abs/2408.15431
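The contractile/extensile classification in the entry above refers to the sign of the active stress in the standard hydrodynamic description of active nematics; as generic background (not the cited paper's model, and with sign conventions that vary across the literature):

    \[
      \sigma^{\mathrm{active}}_{ij} = -\zeta\, Q_{ij},
      \qquad
      Q_{ij} = S\left(n_i n_j - \tfrac{1}{2}\,\delta_{ij}\right) \ \text{(2D)},
    \]

where \mathbf{n} is the director, S the order parameter, and \zeta > 0 is usually called extensile while \zeta < 0 is called contractile.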
The interplay between cosmology and strongly coupled dynamics can yield transient spectral features that vanish at late times, but which may leave behind phenomenological signatures in the spectrum of primordial fluctuations. Of particular interest a
External link: http://arxiv.org/abs/2408.08951
Author:
Yuan, Zilong, Tang, Zechen, Tao, Honggeng, Gong, Xiaoxun, Chen, Zezhou, Wang, Yuxiang, Li, He, Li, Yang, Xu, Zhiming, Sun, Minghui, Zhao, Boheng, Wang, Chong, Duan, Wenhui, Xu, Yong
Deep learning electronic structures from ab initio calculations holds great potential to revolutionize computational materials studies. While existing methods have proved successful in deep-learning density functional theory (DFT) Hamiltonian matrices, they
External link: http://arxiv.org/abs/2407.14379
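The entry above concerns learning DFT Hamiltonians from ab initio data. As a deliberately simplified, hypothetical sketch (not the cited method, which among other things builds in rotational symmetry), a model of this kind regresses Hamiltonian sub-blocks from atomic-environment descriptors:

    import torch
    import torch.nn as nn

    # Toy regression from atomic-pair descriptors to DFT Hamiltonian blocks.
    # Feature and orbital dimensions are made up for illustration; real
    # approaches use symmetry-equivariant architectures, which this omits.
    class HamiltonianBlockNet(nn.Module):
        def __init__(self, feat_dim=32, n_orb=9):
            super().__init__()
            self.n_orb = n_orb
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim, 128), nn.SiLU(),
                nn.Linear(128, n_orb * n_orb),
            )

        def forward(self, pair_features):
            # One (n_orb x n_orb) sub-block of H per atom pair.
            return self.mlp(pair_features).view(-1, self.n_orb, self.n_orb)

    model = HamiltonianBlockNet()
    features = torch.randn(10, 32)         # 10 hypothetical atom pairs
    target_blocks = torch.randn(10, 9, 9)  # placeholder for DFT-computed blocks
    nn.functional.mse_loss(model(features), target_blocks).backward()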
The emergence of large language models (LLMs) is a milestone in generative artificial intelligence, achieving significant success in text comprehension and generation tasks. Despite the tremendous success of LLMs in many downstream tasks, they suffer
External link: http://arxiv.org/abs/2407.10153