Showing 1 - 6 of 6 for search: '"Xi, Zekun"'
Author:
Zhang, Ningyu, Xi, Zekun, Luo, Yujie, Wang, Peng, Tian, Bozhong, Yao, Yunzhi, Zhang, Jintian, Deng, Shumin, Sun, Mengshu, Liang, Lei, Zhang, Zhiqiang, Zhu, Xiaowei, Zhou, Jun, Chen, Huajun
Knowledge representation has been a central aim of AI since its inception. Symbolic Knowledge Graphs (KGs) and neural Large Language Models (LLMs) can both represent knowledge. KGs provide highly accurate and explicit knowledge representation, but …
External link:
http://arxiv.org/abs/2409.07497
The remarkable capabilities of modern large language models are rooted in their vast repositories of knowledge encoded within their parameters, enabling them to perceive the world and engage in reasoning. The inner workings of how these models store …
External link:
http://arxiv.org/abs/2405.17969
Author:
Wang, Mengru, Zhang, Ningyu, Xu, Ziwen, Xi, Zekun, Deng, Shumin, Yao, Yunzhi, Zhang, Qishen, Yang, Linyi, Wang, Jindong, Chen, Huajun
This paper investigates using knowledge editing techniques to detoxify Large Language Models (LLMs). We construct a benchmark, SafeEdit, which covers nine unsafe categories with various powerful attack prompts and equips comprehensive metrics for …
External link:
http://arxiv.org/abs/2403.14472
Author:
Zhang, Ningyu, Yao, Yunzhi, Tian, Bozhong, Wang, Peng, Deng, Shumin, Wang, Mengru, Xi, Zekun, Mao, Shengyu, Zhang, Jintian, Ni, Yuansheng, Cheng, Siyuan, Xu, Ziwen, Xu, Xin, Gu, Jia-Chen, Jiang, Yong, Xie, Pengjun, Huang, Fei, Liang, Lei, Zhang, Zhiqiang, Zhu, Xiaowei, Zhou, Jun, Chen, Huajun
Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication. However, a primary limitation lies in the significant computational demands during training, arising …
External link:
http://arxiv.org/abs/2401.01286
Author:
Wang, Peng, Zhang, Ningyu, Tian, Bozhong, Xi, Zekun, Yao, Yunzhi, Xu, Ziwen, Wang, Mengru, Mao, Shengyu, Wang, Xiaohan, Cheng, Siyuan, Liu, Kangwei, Ni, Yuansheng, Zheng, Guozhou, Chen, Huajun
Large Language Models (LLMs) usually suffer from knowledge cutoff or fallacy issues, which means they are unaware of unseen events or generate text with incorrect facts owing to outdated/noisy data. To this end, many knowledge editing approaches …
External link:
http://arxiv.org/abs/2308.07269
Knowledge Graphs (KGs) often have two characteristics: heterogeneous graph structure and text-rich entity/relation information. Text-based KG embeddings can represent entities by encoding descriptions with pre-trained language models, but no …
External link:
http://arxiv.org/abs/2210.00305