Showing 1 - 10 of 41,602 results for search: '"Multi-perspective"'
Author:
Zhang, Yongheng, Chen, Qiguang, Zhou, Jingxuan, Wang, Peng, Si, Jiasheng, Wang, Jin, Lu, Wenpeng, Qin, Libo
Chain-of-Thought (CoT) has become a vital technique for enhancing the performance of Large Language Models (LLMs), attracting increasing attention from researchers. One stream of approaches focuses on the iterative enhancement of LLMs by continuously …
External link:
http://arxiv.org/abs/2410.04463
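For orientation, the Chain-of-Thought prompting the snippet refers to can be illustrated with a generic zero-shot CoT call; the model name, prompt wording, and use of the OpenAI client below are illustrative assumptions, not the paper's method.

```python
# Minimal zero-shot Chain-of-Thought prompting sketch (illustrative only; the
# model name and prompt wording are assumptions, not this paper's approach).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{
        "role": "user",
        # Appending a step-by-step cue elicits intermediate reasoning (CoT).
        "content": f"{question}\nLet's think step by step.",
    }],
)
print(response.choices[0].message.content)
```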
Author:
Wu, Jiayi, Cai, Hengyi, Yan, Lingyong, Sun, Hao, Li, Xiang, Wang, Shuaiqiang, Yin, Dawei, Gao, Ming
The emergence of Retrieval-augmented generation (RAG) has alleviated the issues of outdated and hallucinatory content in the generation of large language models (LLMs), yet it still reveals numerous limitations. When a general-purpose LLM serves as …
External link:
http://arxiv.org/abs/2412.14510
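For context, the basic retrieve-then-augment pattern behind RAG is sketched below; the toy corpus, keyword-overlap retriever, and prompt template are assumptions for illustration, not the paper's pipeline.

```python
# Minimal retrieve-then-generate (RAG) sketch: a toy keyword retriever plus
# prompt assembly. Corpus, scoring, and prompt wording are illustrative only.
from typing import List

CORPUS = [
    "RAG augments an LLM prompt with passages retrieved from an external corpus.",
    "Chain-of-Thought prompting elicits intermediate reasoning steps.",
    "Knowledge distillation transfers knowledge from a teacher to a student model.",
]

def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    """Rank passages by naive word overlap with the query (stand-in for a real retriever)."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: len(q & set(p.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, passages: List[str]) -> str:
    """Ground the generation step by prepending retrieved context to the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

query = "How does RAG reduce hallucinated content?"
print(build_prompt(query, retrieve(query, CORPUS)))
```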
Comparative reviews are pivotal in understanding consumer preferences and influencing purchasing decisions. Comparative Quintuple Extraction (COQE) aims to identify five key components in text: the target entity, compared entities, compared aspects, …
External link:
http://arxiv.org/abs/2412.08508
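The abstract names three of the five quintuple components before it is cut off; the sketch below fills in the remaining two (comparative predicate and preference label) following the usual COQE formulation, which is an assumption here rather than something stated in the snippet.

```python
# Sketch of a comparative quintuple. The first three fields are named in the
# abstract; the last two (predicate and preference) are assumed from the
# standard COQE formulation, not from this snippet.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ComparativeQuintuple:
    subject: str            # the target entity
    objects: List[str]      # the compared entities
    aspect: Optional[str]   # the compared aspect
    predicate: str          # comparative expression (assumed component)
    preference: str         # e.g. "better" / "worse" / "equal" (assumed component)

example = ComparativeQuintuple(
    subject="Phone A",
    objects=["Phone B"],
    aspect="battery life",
    predicate="lasts longer than",
    preference="better",
)
print(example)
```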
Author:
Wang, Qi, Zhou, Jinjia
We propose a novel and efficient logit distillation method, Multi-perspective Contrastive Logit Distillation (MCLD), which leverages contrastive learning to distill logits from multiple perspectives in knowledge distillation. Recent research on logit …
External link:
http://arxiv.org/abs/2411.10693
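A minimal contrastive logit-distillation loss, assuming an InfoNCE-style objective where each student sample's positive is the teacher's logits for the same sample and the negatives are other samples in the batch, gives a rough sense of the idea; it is a generic sketch, not the authors' MCLD.

```python
# Generic contrastive logit-distillation sketch (not the authors' MCLD):
# the teacher's logits for the same sample act as the positive, other
# teacher logits in the batch as negatives, via an InfoNCE-style loss.
import torch
import torch.nn.functional as F

def contrastive_logit_loss(student_logits: torch.Tensor,
                           teacher_logits: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """student_logits, teacher_logits: (batch, num_classes)."""
    s = F.normalize(student_logits, dim=-1)
    t = F.normalize(teacher_logits, dim=-1)
    sim = s @ t.T / temperature           # (batch, batch) similarity matrix
    targets = torch.arange(s.size(0))     # positive pair sits on the diagonal
    return F.cross_entropy(sim, targets)

# Toy usage with random logits standing in for real model outputs.
student = torch.randn(8, 100)
teacher = torch.randn(8, 100)
print(contrastive_logit_loss(student, teacher).item())
```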
Subjective NLP tasks usually rely on human annotations provided by multiple annotators, whose judgments may vary due to their diverse backgrounds and life experiences. Traditional methods often aggregate multiple annotations into a single ground truth …
External link:
http://arxiv.org/abs/2411.08752
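The "single ground truth" baseline the snippet contrasts against is typically a per-item majority vote; a minimal sketch, assuming string labels and a small toy dataset:

```python
# Aggregating multiple annotations into one label by majority vote
# (the baseline the snippet mentions; toy data for illustration).
from collections import Counter
from typing import Dict, List

annotations: Dict[str, List[str]] = {
    "item_1": ["offensive", "offensive", "not_offensive"],
    "item_2": ["not_offensive", "offensive", "not_offensive"],
}

def majority_vote(labels: List[str]) -> str:
    """Collapse multiple annotator judgments into one label, discarding disagreement."""
    return Counter(labels).most_common(1)[0][0]

aggregated = {item: majority_vote(labels) for item, labels in annotations.items()}
print(aggregated)  # {'item_1': 'offensive', 'item_2': 'not_offensive'}
```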
Large Language Models (LLMs) are pivotal AI agents in complex tasks but still face challenges in open decision-making problems within complex scenarios. To address this, we use the language logic game "Who is Undercover?" (WIU) as an experimental …
External link:
http://arxiv.org/abs/2410.15311
The lack of data transparency in Large Language Models (LLMs) has highlighted the importance of Membership Inference Attack (MIA), which differentiates trained (member) and untrained (non-member) data. Though it shows success in previous studies, …
External link:
http://arxiv.org/abs/2412.13475
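A standard loss-threshold membership inference baseline illustrates what "differentiating member and non-member data" means in practice; the threshold value and toy inputs below are assumptions, and this is not the attack studied in the paper.

```python
# Classic loss-threshold membership inference sketch (a common baseline,
# not this paper's method): samples with loss below a threshold are
# predicted to be training members.
import torch
import torch.nn.functional as F

def loss_based_mia(logits: torch.Tensor, labels: torch.Tensor, threshold: float) -> torch.Tensor:
    """Return a boolean member/non-member prediction per sample.

    logits: (batch, num_classes) model outputs for the queried samples.
    labels: (batch,) ground-truth class indices.
    threshold: loss cutoff, typically calibrated on known non-member data.
    """
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    return per_sample_loss < threshold  # True -> predicted member

# Toy usage with random values standing in for a real model's outputs.
logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(loss_based_mia(logits, labels, threshold=2.0))
```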
Neural machine translation (NMT) systems amplify lexical biases present in their training data, leading to artificially impoverished language in output translations. These language-level characteristics render automatic translations different from …
External link:
http://arxiv.org/abs/2412.08473
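One simple proxy for the "artificially impoverished language" the snippet describes is lexical diversity, e.g. a type-token ratio comparison; the metric choice and example sentences below are assumptions, not necessarily the paper's measure.

```python
# Comparing lexical diversity of machine vs. human translations via a naive
# whitespace-tokenized type-token ratio (illustrative proxy only).
def type_token_ratio(text: str) -> float:
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

machine_output = "the house is big and the house is old and the house is nice"
human_reference = "the house is large, rather old, and yet quite charming"

print(f"machine TTR: {type_token_ratio(machine_output):.2f}")
print(f"human TTR:   {type_token_ratio(human_reference):.2f}")
```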
Climate change poses critical challenges globally, disproportionately affecting low-income countries that often lack resources and linguistic representation on the international stage. Despite Bangladesh's status as one of the most vulnerable nations …
External link:
http://arxiv.org/abs/2410.17225
We introduce a task and dataset for referring expression generation and comprehension in multi-agent embodied environments. In this task, two agents in a shared scene must take into account one another's visual perspective, which may be different …
External link:
http://arxiv.org/abs/2410.03959