Showing 1 - 10 of 358 for search: '"Zhang, Ningyu"'
Author:
Tian, Bozhong, Liang, Xiaozhuan, Cheng, Siyuan, Liu, Qingbin, Wang, Mengru, Sui, Dianbo, Chen, Xi, Chen, Huajun, Zhang, Ningyu
Large Language Models (LLMs) trained on extensive corpora inevitably retain sensitive data, such as personal privacy information and copyrighted material. Recent advancements in knowledge unlearning involve updating LLM parameters to erase specific k…
External link:
http://arxiv.org/abs/2407.01920
Author:
Zhou, Wangchunshu, Ou, Yixin, Ding, Shengwei, Li, Long, Wu, Jialong, Wang, Tiannan, Chen, Jiamin, Wang, Shuai, Xu, Xiaohua, Zhang, Ningyu, Chen, Huajun, Jiang, Yuchen Eleanor
The AI community has been exploring a pathway to artificial general intelligence (AGI) by developing "language agents", which are complex large language model (LLM) pipelines involving both prompting techniques and tool-usage methods. While languag…
External link:
http://arxiv.org/abs/2406.18532
Author:
Lai, Chengyu, Zhou, Sheng, Jiang, Zhimeng, Tan, Qiaoyu, Bei, Yuanchen, Chen, Jiawei, Zhang, Ningyu, Bu, Jiajun
Recommendation systems play a pivotal role in suggesting items to users based on their preferences. However, in online platforms, these systems inevitably offer unsuitable recommendations due to limited model capacity, poor data quality, or evolving…
External link:
http://arxiv.org/abs/2406.04553
The remarkable capabilities of modern large language models are rooted in their vast repositories of knowledge encoded within their parameters, enabling them to perceive the world and engage in reasoning. The inner workings of how these models store…
External link:
http://arxiv.org/abs/2405.17969
Author:
Qiao, Shuofei, Fang, Runnan, Zhang, Ningyu, Zhu, Yuqi, Chen, Xiang, Deng, Shumin, Jiang, Yong, Xie, Pengjun, Huang, Fei, Chen, Huajun
Recent endeavors towards directly using large language models (LLMs) as agent models to execute interactive planning tasks have shown commendable results. Despite their achievements, however, they still struggle with brainless trial-and-error in glob…
External link:
http://arxiv.org/abs/2405.14205
Author:
Wang, Peng, Li, Zexi, Zhang, Ningyu, Xu, Ziwen, Yao, Yunzhi, Jiang, Yong, Xie, Pengjun, Huang, Fei, Chen, Huajun
Large language models (LLMs) need knowledge updates to keep pace with ever-growing world facts and to correct hallucinated responses, motivating methods for lifelong model editing. Where the updated knowledge resides in memories is a fundamental ques…
External link:
http://arxiv.org/abs/2405.14768
Author:
Mao, Shengyu, Jiang, Yong, Chen, Boli, Li, Xiao, Wang, Peng, Wang, Xinyu, Xie, Pengjun, Huang, Fei, Chen, Huajun, Zhang, Ningyu
As Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) techniques have evolved, query rewriting has been widely incorporated into RAG systems for downstream tasks like open-domain QA. Many works have attempted to utilize small…
External link:
http://arxiv.org/abs/2405.14431
Author:
Wang, Mengru, Zhang, Ningyu, Xu, Ziwen, Xi, Zekun, Deng, Shumin, Yao, Yunzhi, Zhang, Qishen, Yang, Linyi, Wang, Jindong, Chen, Huajun
This paper investigates using knowledge editing techniques to detoxify Large Language Models (LLMs). We construct a benchmark, SafeEdit, which covers nine unsafe categories with various powerful attack prompts and equips comprehensive metrics for sys…
External link:
http://arxiv.org/abs/2403.14472
Human motion prediction consists of forecasting future body poses from previously observed sequences. It is a longstanding challenge due to motion's complex dynamics and uncertainty. Existing methods focus on building up complicated neural net…
External link:
http://arxiv.org/abs/2403.14104
Author:
Wang, Xiaohan, Mao, Shengyu, Zhang, Ningyu, Deng, Shumin, Yao, Yunzhi, Shen, Yue, Liang, Lei, Gu, Jinjie, Chen, Huajun
Recently, there has been a growing interest in knowledge editing for Large Language Models (LLMs). Current approaches and evaluations merely explore instance-level editing, while whether LLMs possess the capability to modify concepts remains uncl…
External link:
http://arxiv.org/abs/2403.06259