Showing 1 - 10 of 761 for search: '"Liu, Shenghua"'
Author:
Bi, Baolong, Liu, Shenghua, Wang, Yiwei, Mei, Lingrui, Gao, Hongcheng, Fang, Junfeng, Cheng, Xueqi
As the modern tool of choice for question answering, large language models (LLMs) are expected to deliver answers with up-to-date knowledge. To achieve such ideal question-answering systems, locating and then editing outdated knowledge in the natural…
External link:
http://arxiv.org/abs/2409.10132
Author:
Bi, Baolong, Liu, Shenghua, Wang, Yiwei, Mei, Lingrui, Gao, Hongcheng, Xu, Yilong, Cheng, Xueqi
The parametric knowledge memorized by large language models (LLMs) becomes outdated quickly. In-context editing (ICE) is currently the most effective method for updating the knowledge of LLMs. Recent advancements involve enhancing ICE by modifying th…
External link:
http://arxiv.org/abs/2406.12468
"Jailbreak" is a major safety concern of Large Language Models (LLMs), which occurs when malicious prompts lead LLMs to produce harmful outputs, raising issues about the reliability and safety of LLMs. Therefore, an effective evaluation of jailbreaks…
External link:
http://arxiv.org/abs/2406.11668
The knowledge within large language models (LLMs) may become outdated quickly. While in-context editing (ICE) is currently the most effective method for knowledge editing (KE), it is constrained by the black-box modeling of LLMs and thus lacks interp…
External link:
http://arxiv.org/abs/2405.11613
The rapid development of large language models (LLMs) enables them to convey factual knowledge in a more human-like fashion. Extensive efforts have been made to reduce factual hallucinations by modifying LLMs with factuality decoding. However, they a…
External link:
http://arxiv.org/abs/2404.00216
In recent years, large language models have achieved state-of-the-art performance across multiple domains. However, the progress in the field of graph reasoning with LLMs remains limited. Our work delves into this gap by thoroughly investigating graph…
External link:
http://arxiv.org/abs/2402.07140
Exploring the application of large language models (LLMs) to graph learning is an emerging endeavor. However, the vast amount of information inherent in large graphs poses significant challenges to this process. This work focuses on the link predictio…
External link:
http://arxiv.org/abs/2401.13227
The dynamic nature of language, particularly evident in the realm of slang and memes on the Internet, poses serious challenges to the adaptability of large language models (LLMs). Traditionally anchored to static datasets, these models often struggle…
External link:
http://arxiv.org/abs/2401.12585
The Densest Subgraph Problem (DSP) is an important primitive problem with a wide range of applications, including fraud detection, community detection, and DNA motif discovery. Edge-based density is one of the most common metrics in DSP. Although a maximu…
External link:
http://arxiv.org/abs/2307.15969
A nanocrystalline metal's strength increases significantly as its grain size decreases, a phenomenon known as the Hall-Petch relation. Such relation, however, breaks down when the grains become too small. Experimental studies have circumvented this p…
External link:
http://arxiv.org/abs/2302.08698