Showing 1 - 9 of 9 for search: '"Qi, Zehan"'
Author:
Liu, Xiao, Zhang, Tianjie, Gu, Yu, Iong, Iat Long, Xu, Yifan, Song, Xixuan, Zhang, Shudan, Lai, Hanyu, Liu, Xinyi, Zhao, Hanlin, Sun, Jiadai, Yang, Xinyue, Yang, Yu, Qi, Zehan, Yao, Shuntian, Sun, Xueqiao, Cheng, Siyi, Zheng, Qinkai, Yu, Hao, Zhang, Hanchen, Hong, Wenyi, Ding, Ming, Pan, Lihang, Gu, Xiaotao, Zeng, Aohan, Du, Zhengxiao, Song, Chan Hee, Su, Yu, Dong, Yuxiao, Tang, Jie
Large Multimodal Models (LMMs) have ushered in a new era in artificial intelligence, merging capabilities in both language and vision to form highly capable Visual Foundation Agents. These agents are postulated to excel across a myriad of tasks…
External link:
http://arxiv.org/abs/2408.06327
The rise of large language models (LLMs) has enabled us to seek answers to inherently debatable questions on LLM chatbots, necessitating a reliable way to evaluate their ability. However, traditional QA benchmarks assuming fixed answers are inadequate…
External link:
http://arxiv.org/abs/2408.01419
The common toxicity and societal bias in content generated by large language models (LLMs) necessitate strategies to reduce harm. Present solutions often demand white-box access to the model or substantial training, which is impractical for cutting-edge…
External link:
http://arxiv.org/abs/2407.15366
Author:
Zeng, Zhongshen, Liu, Yinhong, Wan, Yingjia, Li, Jingyao, Chen, Pengguang, Dai, Jianbo, Yao, Yuxuan, Xu, Rongwu, Qi, Zehan, Zhao, Wanru, Shen, Linling, Lu, Jianqiao, Tan, Haochen, Chen, Yukang, Zhang, Hao, Shi, Zhan, Wang, Bailin, Guo, Zhijiang, Jia, Jiaya
Large language models (LLMs) have shown increasing capability in problem-solving and decision-making, largely based on step-by-step chain-of-thought reasoning processes. However, it has become increasingly challenging to evaluate the reasoning capabilities…
External link:
http://arxiv.org/abs/2406.13975
Author:
GLM, Team, Zeng, Aohan, Xu, Bin, Wang, Bowen, Zhang, Chenhui, Yin, Da, Zhang, Dan, Rojas, Diego, Feng, Guanyu, Zhao, Hanlin, Lai, Hanyu, Yu, Hao, Wang, Hongning, Sun, Jiadai, Zhang, Jiajie, Cheng, Jiale, Gui, Jiayi, Tang, Jie, Zhang, Jing, Sun, Jingyu, Li, Juanzi, Zhao, Lei, Wu, Lindong, Zhong, Lucen, Liu, Mingdao, Huang, Minlie, Zhang, Peng, Zheng, Qinkai, Lu, Rui, Duan, Shuaiqi, Zhang, Shudan, Cao, Shulin, Yang, Shuxun, Tam, Weng Lam, Zhao, Wenyi, Liu, Xiao, Xia, Xiao, Zhang, Xiaohan, Gu, Xiaotao, Lv, Xin, Liu, Xinghan, Liu, Xinyi, Yang, Xinyue, Song, Xixuan, Zhang, Xunkai, An, Yifan, Xu, Yifan, Niu, Yilin, Yang, Yuantao, Li, Yueyan, Bai, Yushi, Dong, Yuxiao, Qi, Zehan, Wang, Zhaoyu, Yang, Zhen, Du, Zhengxiao, Hou, Zhenyu, Wang, Zihan
We introduce ChatGLM, an evolving family of large language models that we have been developing over time. This report primarily focuses on the GLM-4 language series, which includes GLM-4, GLM-4-Air, and GLM-4-9B. They represent our most capable model…
External link:
http://arxiv.org/abs/2406.12793
Large language models (LLMs) showcase impressive reasoning capabilities when coupled with Chain-of-Thought (CoT) prompting. However, the robustness of this approach warrants further investigation. In this paper, we introduce a novel scenario termed…
External link:
http://arxiv.org/abs/2405.20902
Author:
Zhang, Shudan, Zhao, Hanlin, Liu, Xiao, Zheng, Qinkai, Qi, Zehan, Gu, Xiaotao, Zhang, Xiaohan, Dong, Yuxiao, Tang, Jie
Large language models (LLMs) have manifested a strong ability to generate code for productive activities. However, current benchmarks for code synthesis, such as HumanEval, MBPP, and DS-1000, are predominantly oriented towards introductory tasks…
External link:
http://arxiv.org/abs/2405.04520
This survey provides an in-depth analysis of knowledge conflicts for large language models (LLMs), highlighting the complex challenges they encounter when blending contextual and parametric knowledge. Our focus is on three categories of knowledge conflicts…
External link:
http://arxiv.org/abs/2403.08319
Author:
Wang, Cunxiang, Liu, Xiaoze, Yue, Yuanhao, Tang, Xiangru, Zhang, Tianhang, Jiayang, Cheng, Yao, Yunzhi, Gao, Wenyang, Hu, Xuming, Qi, Zehan, Wang, Yidong, Yang, Linyi, Wang, Jindong, Xie, Xing, Zhang, Zheng, Zhang, Yue
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As LLMs find applications across diverse domains, the reliability and accuracy of their outputs become vital. We define the Factuality Issue as the probability of…
External link:
http://arxiv.org/abs/2310.07521