Showing 1 - 10 of 1,183 for search: "WANG Xiaozhi"
Author:
Bai, Yushi, Tu, Shangqing, Zhang, Jiajie, Peng, Hao, Wang, Xiaozhi, Lv, Xin, Cao, Shulin, Xu, Jiazheng, Hou, Lei, Dong, Yuxiao, Tang, Jie, Li, Juanzi
This paper introduces LongBench v2, a benchmark designed to assess the ability of LLMs to handle long-context problems requiring deep understanding and reasoning across real-world multitasks. LongBench v2 consists of 503 challenging multiple-choice questions…
External link:
http://arxiv.org/abs/2412.15204
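A minimal sketch of how evaluation on such a multiple-choice long-context benchmark might look, assuming a hypothetical sample format (context, question, lettered choices, gold answer) and a placeholder query_model function rather than the official LongBench v2 harness:

```python
# Minimal sketch: accuracy on multiple-choice long-context questions.
# The sample fields and query_model() are assumptions for illustration,
# not the official LongBench v2 interface.

def query_model(prompt: str) -> str:
    """Placeholder for a real LLM call; should return a letter A-D."""
    raise NotImplementedError

def evaluate(samples: list[dict]) -> float:
    correct = 0
    for s in samples:
        choices = "\n".join(f"{k}. {v}" for k, v in s["choices"].items())
        prompt = (
            f"{s['context']}\n\nQuestion: {s['question']}\n{choices}\n"
            "Answer with a single letter."
        )
        prediction = query_model(prompt).strip().upper()[:1]
        correct += prediction == s["answer"]
    return correct / len(samples)
```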
Large language models (LLMs) struggle to follow instructions with complex constraints in format, length, etc. Following the conventional instruction-tuning practice, previous works conduct post-training on complex instruction-response pairs generated…
External link:
http://arxiv.org/abs/2410.24175
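The kinds of constraints mentioned (format, length) can be checked programmatically; a minimal sketch, using a hypothetical constraint schema that is not from the paper:

```python
# Minimal sketch: verifying a response against format/length constraints.
# The constraint schema here is an illustrative assumption.
import json

def satisfies(response: str, constraint: dict) -> bool:
    kind = constraint["type"]
    if kind == "max_words":
        return len(response.split()) <= constraint["value"]
    if kind == "min_words":
        return len(response.split()) >= constraint["value"]
    if kind == "json":  # response must parse as valid JSON
        try:
            json.loads(response)
            return True
        except json.JSONDecodeError:
            return False
    raise ValueError(f"unknown constraint type: {kind}")

# Usage: all(satisfies(resp, c) for c in [{"type": "max_words", "value": 50}])
```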
Author:
Xiao, Chaojun, Zhang, Zhengyan, Song, Chenyang, Jiang, Dazhi, Yao, Feng, Han, Xu, Wang, Xiaozhi, Wang, Shuo, Huang, Yufei, Lin, Guanyu, Chen, Yingfa, Zhao, Weilin, Tu, Yuge, Zhong, Zexuan, Zhang, Ao, Si, Chenglei, Moo, Khai Hao, Zhao, Chenyang, Chen, Huimin, Lin, Yankai, Liu, Zhiyuan, Shang, Jingbo, Sun, Maosong
Advancements in LLMs have recently unveiled challenges tied to computational efficiency and continual scalability due to their requirement for huge numbers of parameters, making the applications and evolution of these models on devices with limited computation…
External link:
http://arxiv.org/abs/2409.02877
Future event prediction (FEP) is a long-standing and crucial task, as understanding the evolution of events enables early risk identification, informed decision-making, and strategic planning. Existing work typically treats event prediction…
External link:
http://arxiv.org/abs/2408.06578
The Event Factuality Detection (EFD) task determines the factuality of textual events, i.e., classifies whether an event is a fact, possibility, or impossibility, which is essential for faithfully understanding and utilizing event knowledge. However, due…
External link:
http://arxiv.org/abs/2407.15352
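A minimal sketch of the three-way classification the task defines, framed as an LLM prompt; query_model is a placeholder, not the paper's method:

```python
# Minimal sketch: Event Factuality Detection as three-way classification.
# Labels follow the task definition (fact / possibility / impossibility);
# query_model() is a placeholder LLM call.

LABELS = ("fact", "possibility", "impossibility")

def classify_event(text: str, event: str, query_model) -> str:
    prompt = (
        f"Text: {text}\n"
        f"Event: {event}\n"
        "Is this event a fact, a possibility, or an impossibility? "
        "Answer with one word."
    )
    answer = query_model(prompt).strip().lower()
    return answer if answer in LABELS else "unknown"
```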
Large language models (LLMs) excel in various capabilities but also pose safety risks such as generating harmful content and misinformation, even after safety alignment. In this paper, we explore the inner mechanisms of safety alignment from the perspective…
External link:
http://arxiv.org/abs/2406.14144
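One illustrative way to probe for safety-relevant neurons is to rank them by the activation gap between harmful and harmless prompts; this heuristic is an assumption for illustration, not the paper's actual analysis:

```python
# Minimal sketch: rank neurons by mean-activation gap between harmful and
# harmless prompts. An illustrative heuristic, not the paper's method.
import numpy as np

def rank_neurons(acts_harmful: np.ndarray, acts_harmless: np.ndarray):
    """Each input: (num_prompts, num_neurons) activation matrix."""
    gap = np.abs(acts_harmful.mean(axis=0) - acts_harmless.mean(axis=0))
    return np.argsort(gap)[::-1]  # neuron indices, largest gap first
```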
Author:
Tu, Shangqing, Wang, Yuanchun, Yu, Jifan, Xie, Yuyang, Shi, Yaran, Wang, Xiaozhi, Zhang, Jing, Hou, Lei, Li, Juanzi
Large language models have achieved remarkable success on general NLP tasks, but they may fall short on domain-specific problems. Recently, various Retrieval-Augmented Large Language Models (RALLMs) have been proposed to address this shortcoming. However,…
External link:
http://arxiv.org/abs/2406.11681
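A minimal sketch of the retrieval-augmented pattern such systems follow: retrieve top-k passages, then condition the model on them. Both retrieve and query_model are placeholders, not any particular RALLM's API:

```python
# Minimal sketch of retrieval-augmented generation: fetch k relevant
# passages, then answer conditioned on them. retrieve() and query_model()
# are placeholder callables for illustration.

def rag_answer(question: str, retrieve, query_model, k: int = 3) -> str:
    docs = retrieve(question, k)  # returns k relevant text passages
    context = "\n\n".join(docs)
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return query_model(prompt)
```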
Event relation extraction (ERE) is a critical and fundamental challenge for natural language processing. Existing work mainly focuses on directly modeling the entire document, which cannot effectively handle long-range dependencies and information re…
External link:
http://arxiv.org/abs/2405.06890
Generative document retrieval, an emerging paradigm in information retrieval, learns to build connections between documents and identifiers within a single model, garnering significant attention. However, there are still two challenges: (1) neglecting…
External link:
http://arxiv.org/abs/2405.06886
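In this paradigm the model decodes a document identifier token by token, so decoding must be constrained to identifiers that actually exist. A toy sketch of that constraint using a character-level trie (an illustration, not the paper's implementation):

```python
# Minimal sketch: constrain identifier decoding to valid docids via a trie.
# Character-level for simplicity; real systems operate on model tokens.

def build_trie(docids: list[str]) -> dict:
    trie: dict = {}
    for docid in docids:
        node = trie
        for ch in docid:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-identifier marker
    return trie

def allowed_next(trie: dict, prefix: str) -> list[str]:
    node = trie
    for ch in prefix:
        node = node.get(ch, {})
    return [k for k in node if k != "$"]

# Usage: allowed_next(build_trie(["d101", "d102", "d2"]), "d1") -> ["0"]
```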
Large language models (LLMs) usually fall short on information extraction (IE) tasks and struggle to follow their complex instructions. This primarily arises from LLMs not being aligned with humans, as mainstream alignment datasets typically…
External link:
http://arxiv.org/abs/2405.05008
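A minimal sketch of casting IE as instruction following with a fixed output schema; the schema and query_model are illustrative assumptions, not the paper's dataset format:

```python
# Minimal sketch: entity extraction as instruction following with a fixed
# JSON output schema. The schema and query_model() are assumptions.
import json

def extract_entities(text: str, query_model) -> list:
    prompt = (
        "Extract all person and organization names from the text.\n"
        'Return JSON: [{"type": "person|organization", "name": "..."}]\n\n'
        f"Text: {text}"
    )
    raw = query_model(prompt)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return []  # malformed output counts as an extraction failure
```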