Showing 1 - 10 of 650 for search: '"XU Ruifeng"'
Model merging has gained increasing attention as an efficient and effective technique for integrating task-specific weights from various tasks into a unified multi-task model without retraining or additional data. As a representative approach, Task Arithmetic…
External link:
http://arxiv.org/abs/2411.18729
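To make the merging idea in this entry concrete, below is a minimal sketch of task-arithmetic-style merging under stated assumptions: all checkpoints share one architecture, and the function names, file names, and scaling coefficient alpha are illustrative, not the paper's method.

```python
# Minimal sketch of task-arithmetic-style model merging (illustrative only).
# Assumes state_dicts of identical shape for the base and fine-tuned models.
import torch

def task_vector(base_state, finetuned_state):
    """Per-parameter task vector: fine-tuned weights minus base weights."""
    return {k: finetuned_state[k] - base_state[k] for k in base_state}

def merge(base_state, finetuned_states, alpha=0.3):
    """Add the scaled sum of task vectors back onto the base weights."""
    merged = {k: v.clone() for k, v in base_state.items()}
    for ft_state in finetuned_states:
        vec = task_vector(base_state, ft_state)
        for k in merged:
            merged[k] += alpha * vec[k]
    return merged

# Hypothetical usage with state_dicts saved via torch.save:
# base = torch.load("base.pt")
# multitask = merge(base, [torch.load("task_a.pt"), torch.load("task_b.pt")], alpha=0.3)
```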
Idioms represent a ubiquitous vehicle for conveying sentiments in everyday discourse, rendering the nuanced analysis of idiom sentiment crucial for a comprehensive understanding of emotional expression in real-world texts. Nevertheless…
External link:
http://arxiv.org/abs/2409.17588
The success of Large Language Models (LLMs) relies heavily on the huge amount of data learned during the pre-training phase. The opacity of the pre-training process and of the training data causes the results of many benchmark tests to become…
External link:
http://arxiv.org/abs/2409.01790
Large Language Models (LLMs) have demonstrated exceptional performance across various natural language processing tasks, yet they occasionally yield content that is factually inaccurate or discordant with the expected output, a phenomenon empirically…
External link:
http://arxiv.org/abs/2408.08769
Authors:
Lin, Jiayu, Chen, Guanrong, Jin, Bojun, Li, Chenyang, Jia, Shutong, Lin, Wancong, Sun, Yang, He, Yuhang, Yang, Caihua, Bao, Jianzhu, Wu, Jipeng, Su, Wen, Chen, Jinglu, Li, Xinyi, Chen, Tianyu, Han, Mingjie, Du, Shuaiwen, Wang, Zijian, Li, Jiyin, Suo, Fuzhong, Wang, Hao, Lin, Nuanchen, Huang, Xuanjing, Jiang, Changjian, Xu, RuiFeng, Zhang, Long, Cao, Jiuxin, Jin, Ting, Wei, Zhongyu
In this paper, we present the results of the AI-Debater 2023 Challenge held by the Chinese Conference on Affect Computing (CCAC 2023) and introduce the related datasets. We organize two tracks to handle the argumentative generation tasks in different…
External link:
http://arxiv.org/abs/2407.14829
Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review, which is the most representative and challenging task in aspect-based sentiment analysis. A key challenge…
External link:
http://arxiv.org/abs/2406.18078
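For orientation only, the quad structure named in this abstract can be written down directly; the container, category labels, and function signature below are illustrative assumptions rather than the paper's model.

```python
# Illustrative container for an ASQP quad (aspect term, aspect category,
# opinion term, sentiment polarity); not tied to the paper's method.
from dataclasses import dataclass
from typing import List

@dataclass
class SentimentQuad:
    aspect_term: str         # e.g. "battery"
    aspect_category: str     # e.g. "laptop#battery" (category scheme is dataset-specific)
    opinion_term: str        # e.g. "lasts long"
    sentiment_polarity: str  # "positive" | "negative" | "neutral"

def predict_quads(review: str) -> List[SentimentQuad]:
    """Signature an ASQP system would implement: map one review to every
    (aspect term, aspect category, opinion term, polarity) quad it expresses."""
    raise NotImplementedError

# A target output for "The battery lasts long but the screen is dim" might be:
# [SentimentQuad("battery", "laptop#battery", "lasts long", "positive"),
#  SentimentQuad("screen", "laptop#display", "dim", "negative")]
```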
Authors:
Liu, Ziqiang, Fang, Feiteng, Feng, Xi, Du, Xinrun, Zhang, Chenhao, Wang, Zekun, Bai, Yuelin, Zhao, Qixuan, Fan, Liyang, Gan, Chengguang, Lin, Hongquan, Li, Jiaming, Ni, Yuansheng, Wu, Haihong, Narsupalli, Yaswanth, Zheng, Zhigang, Li, Chengming, Hu, Xiping, Xu, Ruifeng, Chen, Xiaojun, Yang, Min, Liu, Jiaheng, Liu, Ruibo, Huang, Wenhao, Zhang, Ge, Ni, Shiwen
Rapid advancements in the development of multimodal large language models (MLLMs) have consistently led to new breakthroughs on various benchmarks. In response, numerous challenging and comprehensive benchmarks have been proposed to more accurately…
External link:
http://arxiv.org/abs/2406.05862
Large language models (LLMs) have achieved promising results in sentiment analysis through the in-context learning (ICL) paradigm. However, their ability to distinguish subtle sentiments remains a challenge. Inspired by the human ability to adjust…
External link:
http://arxiv.org/abs/2406.02911
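As a reference point for the ICL paradigm mentioned here, a few-shot sentiment prompt can be assembled as sketched below; the demonstrations, labels, and function name are invented for illustration and do not reflect the paper's method.

```python
# Minimal sketch of in-context learning (ICL) for sentiment classification:
# a few labeled demonstrations are prepended to the query before it is sent
# to an LLM. The demonstrations below are invented for illustration.
DEMONSTRATIONS = [
    ("The service was quick and the staff were friendly.", "positive"),
    ("The packaging arrived damaged and nobody replied to my email.", "negative"),
    ("The talk covered the agenda items listed beforehand.", "neutral"),
]

def build_icl_prompt(query: str) -> str:
    lines = ["Classify the sentiment of each sentence as positive, negative, or neutral.", ""]
    for text, label in DEMONSTRATIONS:
        lines.append(f"Sentence: {text}\nSentiment: {label}\n")
    lines.append(f"Sentence: {query}\nSentiment:")
    return "\n".join(lines)

# The resulting string would be passed to any chat/completions API;
# the model is expected to continue with a single sentiment label.
print(build_icl_prompt("The plot dragged, but the soundtrack was beautiful."))
```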
Numeral systems and units of measurement are two closely intertwined aspects of human activity and interact with the languages that express them. Currently, the evaluation of Large Language Models (LLMs) often involves mathematical reasoning…
External link:
http://arxiv.org/abs/2406.02864
Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training
Published in:
ACL 2024, Main Conference
Large Language Models (LLMs) exhibit substantial capabilities yet encounter challenges, including hallucination, outdated knowledge, and untraceable reasoning processes. Retrieval-augmented generation (RAG) has emerged as a promising solution, integrating…
External link:
http://arxiv.org/abs/2405.20978
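To ground the RAG setup this abstract refers to, here is a minimal sketch of the generic retrieve-then-generate loop; the retriever, generator, and prompt wording are placeholders, and the adaptive adversarial training proposed in the paper is not modeled.

```python
# Minimal sketch of a generic retrieval-augmented generation (RAG) loop.
# The retrieve and generate callables are placeholders, not the paper's system.
from typing import Callable, List

def rag_answer(
    question: str,
    retrieve: Callable[[str, int], List[str]],  # returns top-k passages for a query
    generate: Callable[[str], str],             # wraps an LLM completion call
    k: int = 5,
) -> str:
    passages = retrieve(question, k)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the passages below; "
        "retrieved passages may be noisy or irrelevant.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```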