Showing 1 - 10 of 64 for search: '"Tu, Quan"'
Large language model agents have demonstrated remarkable advancements across various complex tasks. Recent works focus on optimizing the agent team or employing self-reflection to iteratively solve complex tasks. Since these agents are all based on…
External link:
http://arxiv.org/abs/2404.05569
Large Language Models (LLMs) demonstrate superior performance in generative scenarios and have attracted widespread attention. Among them, stylized dialogue generation is essential in the context of LLMs for building intelligent and engaging dialogue…
External link:
http://arxiv.org/abs/2403.11439
Standard Large Language Models (LLMs) struggle with handling dialogues with long contexts due to efficiency and consistency issues. According to our observation, dialogue contexts are highly structured, and the special token of End-of-Utterance…
External link:
http://arxiv.org/abs/2403.08312
Most existing news recommendation methods tackle this task by conducting semantic matching between candidate news and user representations produced from historically clicked news. However, they overlook the high-level connections among different news articles…
External link:
http://arxiv.org/abs/2403.03424
Personalized dialogue systems have gained significant attention in recent years for their ability to generate responses in alignment with different personas. However, most existing approaches rely on pre-defined personal profiles, which are not only…
External link:
http://arxiv.org/abs/2403.03102
Recently, the advent of large language models (LLMs) has revolutionized generative agents. Among them, Role-Playing Conversational Agents (RPCAs) attract considerable attention due to their ability to emotionally engage users. However, the absence of…
External link:
http://arxiv.org/abs/2401.01275
The rapid evolution of large language models necessitates effective benchmarks for evaluating their role knowledge, which is essential for establishing connections with the real world and providing more immersive interactions. This paper introduces R…
External link:
http://arxiv.org/abs/2312.16132
Recent studies have highlighted a phenomenon in large language models (LLMs) known as "the reversal curse," in which the order of knowledge entities in the training data biases the models' comprehension. For example, if a model is trained on sentences…
External link:
http://arxiv.org/abs/2311.07468
InCharacter: Evaluating Personality Fidelity in Role-Playing Agents through Psychological Interviews
Author:
Wang, Xintao, Xiao, Yunze, Huang, Jen-tse, Yuan, Siyu, Xu, Rui, Guo, Haoran, Tu, Quan, Fei, Yaying, Leng, Ziang, Wang, Wei, Chen, Jiangjie, Li, Cheng, Xiao, Yanghua
Role-playing agents (RPAs), powered by large language models, have emerged as a flourishing field of applications. However, a key challenge lies in assessing whether RPAs accurately reproduce the personas of target characters, namely their character…
External link:
http://arxiv.org/abs/2310.17976
CycleAlign: Iterative Distillation from Black-box LLM to White-box Models for Better Human Alignment
Language models trained on large-scale corpora often generate content that is harmful, toxic, or contrary to human preferences, making their alignment with human values a critical concern. Reinforcement learning from human feedback (RLHF) with algorithms…
External link:
http://arxiv.org/abs/2310.16271