Showing 1 - 10 of 329 for the search: '"Liu, Xiaozhong"'
Author:
Wang, Haining, Clark, Jason, McKelvey, Hannah, Sterman, Leila, Gao, Zheng, Tian, Zuoyu, Kübler, Sandra, Liu, Xiaozhong
A vast amount of scholarly work is published daily, yet much of it remains inaccessible to the general public due to dense jargon and complex language. To address this challenge in science communication, we introduce a reinforcement learning framework…
External link:
http://arxiv.org/abs/2410.17088
Author:
Lin, Tianqianjin, Yan, Pengwei, Song, Kaisong, Jiang, Zhuoren, Kang, Yangyang, Lin, Jun, Yuan, Weikang, Cao, Junjie, Sun, Changlong, Liu, Xiaozhong
Graph foundation models (GFMs) have recently gained significant attention. However, the unique data processing and evaluation setups employed by different studies hinder a deeper understanding of their progress. Additionally, current research tends to…
External link:
http://arxiv.org/abs/2410.14961
User Satisfaction Estimation is an important task that is increasingly applied in goal-oriented dialogue systems to estimate whether the user is satisfied with the service. It is observed that whether the user's needs are met often triggers various…
External link:
http://arxiv.org/abs/2410.09556
Large Language Models (LLMs) have demonstrated exceptional capabilities in understanding and generating natural language. However, their high deployment costs often pose a barrier to practical applications. Cascading local and server models…
External link:
http://arxiv.org/abs/2410.08014
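The record above mentions cascading local and server models to reduce LLM deployment cost. As an illustration of the general idea only (the routing criterion in the actual paper is not shown in this snippet, and all names here are hypothetical toy stand-ins), a minimal confidence-threshold cascade might look like:

```python
# Toy sketch of a local/server model cascade. The "models" below are
# illustrative stubs, not real LLM calls: the cheap local model also
# reports a confidence score, and the costlier server model is used
# only as a fallback when that confidence is low.

def local_model(query: str) -> tuple[str, float]:
    """Stub local model: confident on short queries, unsure on long ones."""
    confidence = 0.9 if len(query.split()) <= 5 else 0.3
    return f"local-answer({query})", confidence

def server_model(query: str) -> str:
    """Stub server model: always answers, at higher (simulated) cost."""
    return f"server-answer({query})"

def cascade(query: str, threshold: float = 0.5) -> tuple[str, str]:
    """Try the local model first; escalate to the server model when
    the local confidence falls below the threshold."""
    answer, confidence = local_model(query)
    if confidence >= threshold:
        return answer, "local"
    return server_model(query), "server"
```

With these stubs, a short query is served locally while a longer one is escalated; in a real deployment the confidence signal would come from the local model itself (e.g., token-level probabilities), which is the design question such cascading work studies.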
Author:
Yuan, Weikang, Cao, Junjie, Jiang, Zhuoren, Kang, Yangyang, Lin, Jun, Song, Kaisong, Lin, Tianqianjin, Yan, Pengwei, Sun, Changlong, Liu, Xiaozhong
Large Language Models (LLMs) could struggle to fully understand legal theories and perform complex legal reasoning tasks. In this study, we introduce a challenging task (confusing charge prediction) to better evaluate LLMs' understanding of legal theories…
External link:
http://arxiv.org/abs/2410.02507
Author:
Zhang, Yuehan, Lv, Peizhuo, Liu, Yinpeng, Ma, Yongqiang, Lu, Wei, Wang, Xiaofeng, Liu, Xiaozhong, Liu, Jiawei
The rapid development of LLMs brings both convenience and potential threats. As customized and private LLMs are widely applied, model copyright protection has become important. Text watermarking is emerging as a promising solution to AI-generated text…
External link:
http://arxiv.org/abs/2409.09739
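The record above points to text watermarking as a way to attribute AI-generated text. As a sketch of the general idea only (not this paper's scheme), a toy "green list" detector can be built by letting a hash of each adjacent token pair mark roughly half of all continuations as "green"; watermarked generation would over-sample green tokens, so a high green fraction signals a watermark:

```python
import hashlib

# Toy "green list" watermark detector (illustrative, not the paper's
# method): a deterministic hash of (previous token, current token)
# labels about half of all pairs "green". Natural text lands near a
# green fraction of 0.5; watermarked text would score much higher.

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically label a token pair green (~50% of pairs)."""
    digest = hashlib.sha256((prev_token + "|" + token).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of adjacent token pairs labelled green."""
    if len(tokens) < 2:
        return 0.0
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return greens / (len(tokens) - 1)
```

In practice detection also needs a statistical test (e.g., a z-score against the 0.5 baseline) and robustness to paraphrasing, which is where much of the research effort in this area goes.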
Open source software (OSS) is integral to modern product development, and any vulnerability within it potentially compromises numerous products. While developers strive to apply security patches, pinpointing these patches among extensive OSS updates…
External link:
http://arxiv.org/abs/2409.06816
Retrieval-Augmented Generation (RAG) is applied to solve hallucination problems and real-time constraints of large language models, but it also induces vulnerabilities against retrieval corruption attacks. Existing research mainly explores the unreliability…
External link:
http://arxiv.org/abs/2407.13757
The integration of generative Large Language Models (LLMs) into various applications, including the legal domain, has been accelerated by their expansive and versatile nature. However, when facing a legal case, users without a legal background often…
External link:
http://arxiv.org/abs/2406.03600
Author:
Xiong, Zi, Qing, Lizhi, Kang, Yangyang, Liu, Jiawei, Li, Hongsong, Sun, Changlong, Liu, Xiaozhong, Lu, Wei
The widespread use of pre-trained language models (PLMs) in natural language processing (NLP) has greatly improved performance outcomes. However, these models' vulnerability to adversarial attacks (e.g., camouflaged hints from drug dealers), particularly…
External link:
http://arxiv.org/abs/2404.12014