Showing 1 - 10 of 357 for search: '"Li, Jundong"'
As Large Language Models (LLMs) are increasingly deployed to handle various natural language processing (NLP) tasks, concerns regarding the potential negative societal impacts of LLM-generated content have also arisen. To evaluate the biases exhibited …
External link:
http://arxiv.org/abs/2407.02408
Author:
Tan, Zhen, Zhao, Chengshuai, Moraffah, Raha, Li, Yifan, Wang, Song, Li, Jundong, Chen, Tianlong, Liu, Huan
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by integrating external knowledge bases, improving their performance in applications like fact-checking and information searching. In this paper, we demonstrate a security …
External link:
http://arxiv.org/abs/2406.19417
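The RAG pattern the snippet above describes (retrieve from an external knowledge base, then condition the LLM on what was retrieved) can be sketched minimally as follows. The toy word-overlap retriever, the corpus, and the prompt template are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch of retrieval-augmented generation (RAG) prompting.
# Assumption: a toy keyword-overlap retriever stands in for a real
# dense/sparse retriever over an external knowledge base.

def retrieve(query, corpus, k=1):
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    """Prepend the retrieved context to the question for the LLM."""
    context = "\n".join(retrieve(query, corpus, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Graph neural networks operate on graph-structured data.",
]
prompt = build_prompt("Where is the Eiffel Tower located?", corpus)
```

The assembled `prompt` would then be sent to an LLM; the retrieval step is exactly the external dependency whose security the paper examines.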
Few-shot Knowledge Graph (KG) Relational Reasoning aims to predict unseen triplets (i.e., query triplets) for rare relations in KGs, given only several triplets of these relations as references (i.e., support triplets). This task has gained significant …
External link:
http://arxiv.org/abs/2406.15507
Causality lays the foundation for the trajectory of our world. Causal inference (CI), which aims to infer intrinsic causal relations among variables of interest, has emerged as a crucial research topic. Nevertheless, the lack of observation of important …
External link:
http://arxiv.org/abs/2406.13966
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications. However, they are known to generate factually inaccurate outputs, a.k.a. the hallucination problem. In recent years, incorporating external knowledge …
External link:
http://arxiv.org/abs/2406.13862
Author:
Gladstone, Alexi, Nanduru, Ganesh, Islam, Md Mofijul, Chadha, Aman, Li, Jundong, Iqbal, Tariq
One of the predominant methods for training world models is autoregressive prediction in the output space of the next element of a sequence. In Natural Language Processing (NLP), this takes the form of Large Language Models (LLMs) predicting the next …
External link:
http://arxiv.org/abs/2406.08862
In-context learning (ICL) empowers large language models (LLMs) to tackle new tasks by using a series of training instances as prompts. Since generating the prompts needs to sample from a vast pool of instances and annotate them (e.g., add labels in …)
External link:
http://arxiv.org/abs/2406.03730
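The ICL setup the snippet above describes (a series of annotated training instances concatenated into the prompt) can be sketched as follows. The sentiment-classification task, the demonstration texts, and the prompt format are illustrative assumptions only.

```python
# Illustrative sketch of in-context learning (ICL) prompt construction:
# a few labeled demonstrations are formatted and concatenated ahead of
# the query so the LLM can infer the task from context alone.

def build_icl_prompt(demonstrations, query):
    """Format (input, label) demonstrations followed by the unlabeled query."""
    lines = [f"Review: {text}\nSentiment: {label}"
             for text, label in demonstrations]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("A delightful film from start to finish.", "positive"),
    ("Two hours of my life I will never get back.", "negative"),
]
prompt = build_icl_prompt(demos, "An absolute joy to watch.")
```

Each demonstration must first be sampled from a large pool and annotated, which is the labeling cost the paper addresses.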
The ubiquity of large-scale graphs in node-classification tasks significantly hinders the real-world applications of Graph Neural Networks (GNNs). Node sampling, graph coarsening, and dataset condensation are effective strategies for enhancing data efficiency …
External link:
http://arxiv.org/abs/2405.17404
Author:
Wang, Song, Dong, Yushun, Zhang, Binchi, Chen, Zihan, Fu, Xingbo, He, Yinhan, Shen, Cong, Zhang, Chuxu, Chawla, Nitesh V., Li, Jundong
Graph Machine Learning (Graph ML) has witnessed substantial advancements in recent years. With their remarkable ability to process graph-structured data, Graph ML techniques have been extensively utilized across diverse applications, including critical …
External link:
http://arxiv.org/abs/2405.11034
Author:
Wu, Xuansheng, Zhao, Haiyan, Zhu, Yaochen, Shi, Yucheng, Yang, Fan, Liu, Tianming, Zhai, Xiaoming, Yao, Wenlin, Li, Jundong, Du, Mengnan, Liu, Ninghao
Explainable AI (XAI) refers to techniques that provide human-understandable insights into the workings of AI models. Recently, the focus of XAI is being extended towards Large Language Models (LLMs), which are often criticized for their lack of transparency …
External link:
http://arxiv.org/abs/2403.08946