Showing 1 - 10 of 146 for the search: '"Chen, Huajun"'
Knowledge Graph Embedding (KGE) is a common method for Knowledge Graphs (KGs) to serve various artificial intelligence tasks. The suitable dimensions of the embeddings depend on the storage and computing conditions of the specific application scenarios…
External link:
http://arxiv.org/abs/2407.02779
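To make the notion of "embedding dimension" in the snippet above concrete, here is a minimal TransE-style scoring sketch (a standard KGE baseline, not the method of the paper listed; the toy entities and `dim` value are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 32  # embedding dimension: the storage/compute trade-off discussed above

# Hypothetical toy vocabulary of entities and relations
entities = {"Paris": 0, "France": 1, "Berlin": 2, "Germany": 3}
relations = {"capital_of": 0}

E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def transe_score(h, r, t):
    """TransE plausibility: negative distance, so higher means more plausible."""
    return -np.linalg.norm(E[entities[h]] + R[relations[r]] - E[entities[t]])

score = transe_score("Paris", "capital_of", "France")
```

Each entity costs `dim` floats of storage, so halving `dim` halves the embedding table; choosing it well is exactly the deployment question the abstract raises.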
Graph Neural Network (GNN), whose main idea is to encode the structure information of graphs by propagation and aggregation, has developed rapidly. It has achieved excellent performance in representation learning for multiple types of graphs, such as h…
External link:
http://arxiv.org/abs/2407.02762
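The "propagation and aggregation" idea mentioned in the snippet above can be sketched in a few lines (a generic mean-aggregation layer on a hypothetical toy graph, not the architecture of the paper listed):

```python
import numpy as np

# Toy 4-node undirected graph as an adjacency list (hypothetical example)
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
X = np.eye(4)  # initial node features (one-hot per node)

def propagate(X, adj):
    """One GNN layer: aggregate neighbor features by mean, then mix with self."""
    out = np.zeros_like(X)
    for node, neighbors in adj.items():
        agg = X[neighbors].mean(axis=0)    # aggregation over neighbors
        out[node] = 0.5 * (X[node] + agg)  # propagation/combination step
    return out

H = propagate(X, adj)  # after one layer, each node mixes in its neighbors' features
```

Stacking such layers lets information propagate over longer paths, which is the core mechanism real GNNs refine with learned weights and nonlinearities.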
Author:
Tian, Bozhong, Liang, Xiaozhuan, Cheng, Siyuan, Liu, Qingbin, Wang, Mengru, Sui, Dianbo, Chen, Xi, Chen, Huajun, Zhang, Ningyu
Large Language Models (LLMs) trained on extensive corpora inevitably retain sensitive data, such as personal privacy information and copyrighted material. Recent advancements in knowledge unlearning involve updating LLM parameters to erase specific k…
External link:
http://arxiv.org/abs/2407.01920
Author:
Zhou, Wangchunshu, Ou, Yixin, Ding, Shengwei, Li, Long, Wu, Jialong, Wang, Tiannan, Chen, Jiamin, Wang, Shuai, Xu, Xiaohua, Zhang, Ningyu, Chen, Huajun, Jiang, Yuchen Eleanor
The AI community has been exploring a pathway to artificial general intelligence (AGI) by developing "language agents": complex pipelines over large language models (LLMs) that involve both prompting techniques and tool-usage methods. While language…
External link:
http://arxiv.org/abs/2406.18532
The remarkable capabilities of modern large language models are rooted in their vast repositories of knowledge encoded within their parameters, enabling them to perceive the world and engage in reasoning. The inner workings of how these models store…
External link:
http://arxiv.org/abs/2405.17969
Author:
Qiao, Shuofei, Fang, Runnan, Zhang, Ningyu, Zhu, Yuqi, Chen, Xiang, Deng, Shumin, Jiang, Yong, Xie, Pengjun, Huang, Fei, Chen, Huajun
Recent endeavors toward directly using large language models (LLMs) as agent models to execute interactive planning tasks have shown commendable results. Despite their achievements, however, they still struggle with brainless trial-and-error in global…
External link:
http://arxiv.org/abs/2405.14205
Author:
Wang, Peng, Li, Zexi, Zhang, Ningyu, Xu, Ziwen, Yao, Yunzhi, Jiang, Yong, Xie, Pengjun, Huang, Fei, Chen, Huajun
Large language models (LLMs) need knowledge updates to keep up with ever-growing world facts and to correct hallucinated responses, motivating methods for lifelong model editing. Where the updated knowledge resides in memories is a fundamental question…
External link:
http://arxiv.org/abs/2405.14768
The past years have witnessed a proliferation of large language models (LLMs). Yet automated and unbiased evaluation of LLMs is challenging, due to the inaccuracy of standard metrics in reflecting human preferences and the inefficiency of sampling in…
External link:
http://arxiv.org/abs/2404.08008
Author:
Wang, Mengru, Zhang, Ningyu, Xu, Ziwen, Xi, Zekun, Deng, Shumin, Yao, Yunzhi, Zhang, Qishen, Yang, Linyi, Wang, Jindong, Chen, Huajun
This paper investigates using knowledge editing techniques to detoxify Large Language Models (LLMs). We construct a benchmark, SafeEdit, which covers nine unsafe categories with various powerful attack prompts and provides comprehensive metrics for sys…
External link:
http://arxiv.org/abs/2403.14472
Answering logical queries on knowledge graphs (KGs) poses a significant challenge for machine reasoning. The primary obstacle in this task stems from the inherent incompleteness of KGs. Existing research has predominantly focused on addressing the issue…
External link:
http://arxiv.org/abs/2403.12646