Showing 1 - 10 of 5,231 results for search: '"Ji,Rong"'
Author:
Cheng, Yiruo, Mao, Kelong, Zhao, Ziliang, Dong, Guanting, Qian, Hongjin, Wu, Yongkang, Sakai, Tetsuya, Wen, Ji-Rong, Dou, Zhicheng
Retrieval-Augmented Generation (RAG) has become a powerful paradigm for enhancing large language models (LLMs) through external knowledge retrieval. Despite its widespread attention, existing academic research predominantly focuses on single-turn RAG…
External link:
http://arxiv.org/abs/2410.23090
Zero-shot in-context learning (ZS-ICL) aims to conduct in-context learning (ICL) without using human-annotated demonstrations. Most ZS-ICL methods use large language models (LLMs) to generate (input, label) pairs as pseudo-demonstrations and leverage…
External link:
http://arxiv.org/abs/2410.20215
Author:
Du, Yifan, Huo, Yuqi, Zhou, Kun, Zhao, Zijia, Lu, Haoyu, Huang, Han, Zhao, Wayne Xin, Wang, Bingning, Chen, Weipeng, Wen, Ji-Rong
Video Multimodal Large Language Models (MLLMs) have shown remarkable capability of understanding the video semantics on various downstream tasks. Despite the advancements, there is still a lack of systematic research on visual context representation…
External link:
http://arxiv.org/abs/2410.13694
Large language models (LLMs) have become increasingly proficient at simulating various personality traits, an important capability for supporting related applications (e.g., role-playing). To further improve this capacity, in this paper, we present a…
External link:
http://arxiv.org/abs/2410.12327
Multimodal learning is expected to boost model performance by integrating information from different modalities. However, its potential is not fully exploited because the widely-used joint training strategy, which has a uniform objective for all modalities…
External link:
http://arxiv.org/abs/2410.11582
Following natural instructions is crucial for the effective application of Retrieval-Augmented Generation (RAG) systems. Despite recent advancements in Large Language Models (LLMs), research on assessing and improving instruction-following (IF) alignment…
External link:
http://arxiv.org/abs/2410.09584
Author:
Qu, Changle, Dai, Sunhao, Wei, Xiaochi, Cai, Hengyi, Wang, Shuaiqiang, Yin, Dawei, Xu, Jun, Wen, Ji-Rong
Tool learning enables Large Language Models (LLMs) to interact with external environments by invoking tools, serving as an effective strategy to mitigate the limitations inherent in their pre-training data. In this process, tool documentation plays a…
External link:
http://arxiv.org/abs/2410.08197
Author:
Chen, Zhipeng, Song, Liang, Zhou, Kun, Zhao, Wayne Xin, Wang, Bingning, Chen, Weipeng, Wen, Ji-Rong
Multi-lingual ability transfer has become increasingly important for the broad application of large language models (LLMs). Existing work relies heavily on training with multi-lingual ability-related data, which may not be available for low-resource…
External link:
http://arxiv.org/abs/2410.07825
Author:
Tang, Jiakai, Gao, Heyang, Pan, Xuchen, Wang, Lei, Tan, Haoran, Gao, Dawei, Chen, Yushuo, Chen, Xu, Lin, Yankai, Li, Yaliang, Ding, Bolin, Zhou, Jingren, Wang, Jun, Wen, Ji-Rong
With the rapid advancement of large language models (LLMs), recent years have witnessed many promising studies on leveraging LLM-based agents to simulate human social behavior. While prior work has demonstrated significant potential across various domains…
External link:
http://arxiv.org/abs/2410.04360
Author:
Zhang, Zeyu, Dai, Quanyu, Chen, Luyu, Jiang, Zeren, Li, Rui, Zhu, Jieming, Chen, Xu, Xie, Yi, Dong, Zhenhua, Wen, Ji-Rong
LLM-based agents have been widely applied as personal assistants, capable of memorizing information from user messages and responding to personal queries. However, an objective and automatic evaluation of their memory capability is still lacking…
External link:
http://arxiv.org/abs/2409.20163