Showing 1 - 10 of 503 results for the search: "knowledge graph question answering"
Knowledge graph question answering (KGQA) involves answering natural language questions by leveraging structured information stored in a knowledge graph. Typically, KGQA systems first retrieve a targeted subgraph from a large-scale knowledge graph, which …
External link:
http://arxiv.org/abs/2410.01401
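The "retrieve a targeted subgraph" step mentioned in the abstract above is common to most KGQA pipelines. Below is a minimal sketch, assuming the knowledge graph is a plain list of (head, relation, tail) triples and the topic entities in the question have already been linked; the entity names and data are illustrative only.

```python
# Minimal sketch of the subgraph-retrieval step in a typical KGQA pipeline.
# Assumes the KG is a list of (head, relation, tail) triples and the topic
# entities mentioned in the question are already linked.
from collections import defaultdict

def build_index(triples):
    """Index triples by both head and tail entity for fast neighbourhood lookup."""
    index = defaultdict(list)
    for h, r, t in triples:
        index[h].append((h, r, t))
        index[t].append((h, r, t))
    return index

def retrieve_subgraph(topic_entities, index, hops=2):
    """Collect all triples within `hops` hops of the topic entities."""
    frontier = set(topic_entities)
    seen_entities = set(topic_entities)
    subgraph = set()
    for _ in range(hops):
        next_frontier = set()
        for entity in frontier:
            for h, r, t in index.get(entity, []):
                subgraph.add((h, r, t))
                for neighbour in (h, t):
                    if neighbour not in seen_entities:
                        seen_entities.add(neighbour)
                        next_frontier.add(neighbour)
        frontier = next_frontier
    return subgraph

triples = [
    ("Cinderella", "author", "Charles_Perrault"),
    ("Charles_Perrault", "born_in", "Paris"),
    ("Paris", "capital_of", "France"),
]
print(retrieve_subgraph({"Cinderella"}, build_index(triples), hops=2))
```

In practice this neighbourhood expansion is usually combined with relation pruning or semantic scoring so the retrieved subgraph stays small enough for downstream reasoning.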
Recent studies have explored the use of Large Language Models (LLMs) with Retrieval Augmented Generation (RAG) for Knowledge Graph Question Answering (KGQA). They typically require rewriting retrieved subgraphs into natural language formats comprehensible …
External link:
http://arxiv.org/abs/2409.19753
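The subgraph-rewriting step this abstract refers to is often implemented by verbalising the retrieved triples into plain sentences and prepending them to the question as retrieval-augmented context. The sketch below is a generic illustration of that idea, not the paper's method; the prompt wording and the ask_llm helper are hypothetical.

```python
# Generic illustration of verbalising retrieved triples into natural language
# and packaging them as RAG context for an LLM. Prompt wording and the
# ask_llm helper are hypothetical placeholders.

def verbalise(triples):
    """Turn (head, relation, tail) triples into simple English sentences."""
    return "\n".join(
        f"{h.replace('_', ' ')} {r.replace('_', ' ')} {t.replace('_', ' ')}."
        for h, r, t in sorted(triples)
    )

def build_prompt(question, triples):
    context = verbalise(triples)
    return (
        "Answer the question using only the facts below.\n\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

triples = {("Cinderella", "author", "Charles_Perrault")}
print(build_prompt("Who is the author of Cinderella?", triples))
# The resulting prompt would be sent to any chat/completion API, e.g.
# answer = ask_llm(build_prompt(question, subgraph))   # hypothetical helper
```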
Published in:
International Conference on Applications of Natural Language to Information Systems, pp. 107-118, Springer, 2024
While being one of the most popular question types, simple questions such as "Who is the author of Cinderella?" are still not completely solved. Surprisingly, even the most powerful modern Large Language Models are prone to errors when dealing with …
External link:
http://arxiv.org/abs/2409.15902
Knowledge Base Question Answering (KBQA) is a long-standing task of answering questions over knowledge bases. Recently, the evolving dynamics of knowledge have attracted growing interest in Temporal Knowledge Graph Question Answering (TKGQA) …
External link:
http://arxiv.org/abs/2406.14191
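To make the contrast with static KGQA concrete, a toy illustration follows: temporal knowledge graphs attach validity intervals to facts, and a TKGQA system must respect the time constraint expressed in the question. The quadruple format and the example data are assumptions for illustration, not drawn from the paper.

```python
# Toy illustration of what distinguishes TKGQA from static KGQA: facts are
# quadruples with a validity interval, and answers must satisfy a temporal
# constraint extracted from the question. Data and format are assumptions.

quadruples = [
    ("Barack_Obama", "president_of", "USA", (2009, 2017)),
    ("Donald_Trump", "president_of", "USA", (2017, 2021)),
]

def who_held(relation, obj, year, facts):
    """Return subjects s such that (s, relation, obj) held during `year`."""
    return [
        s for s, r, o, (start, end) in facts
        if r == relation and o == obj and start <= year < end
    ]

# "Who was the president of the USA in 2015?"
print(who_held("president_of", "USA", 2015, quadruples))  # ['Barack_Obama']
```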
Published in:
Jisuanji Kexue yu Tansuo, Vol. 18, Iss. 11, pp. 2887-2900 (2024)
Knowledge graph question answering (KGQA) is a technology that retrieves relevant answers from a knowledge graph by processing natural language questions posed by users. Early KGQA technologies were limited by the size of knowledge graphs, computational …
External link:
https://doaj.org/article/fc4b21656f73454fa667d82d09efdde3
Large language models present opportunities for innovative Question Answering over Knowledge Graphs (KGQA). However, they are not inherently designed for query generation. To bridge this gap, solutions have been proposed that rely on fine-tuning or …
External link:
http://arxiv.org/abs/2407.01409
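The query-generation setting described above is commonly approached by prompting an LLM, optionally with a few in-context examples, to translate the question into a formal query such as SPARQL. The sketch below assumes a Wikidata-style target; the example question/query pair and the prompt wording are illustrative assumptions, not the paper's prompt.

```python
# Sketch of few-shot query generation: prompt an LLM to translate a natural
# language question into a SPARQL query over Wikidata. Example pair and
# wording are illustrative assumptions.

FEW_SHOT = """\
Question: Who is the author of Cinderella?
SPARQL:
SELECT ?authorLabel WHERE {
  ?work rdfs:label "Cinderella"@en ;
        wdt:P50 ?author .
  ?author rdfs:label ?authorLabel .
  FILTER(LANG(?authorLabel) = "en")
}
"""

def query_generation_prompt(question: str) -> str:
    """Build a few-shot prompt asking the model to emit only a SPARQL query."""
    return (
        "Translate the question into a SPARQL query over Wikidata.\n\n"
        f"{FEW_SHOT}\n"
        f"Question: {question}\nSPARQL:"
    )

print(query_generation_prompt("Which films did Hayao Miyazaki direct?"))
```

The generated query would then be validated and executed against a SPARQL endpoint; fine-tuning-free approaches typically wrap this step in parsing checks and retry loops.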
We present LinkQ, a system that leverages a large language model (LLM) to facilitate knowledge graph (KG) query construction through natural language question-answering. Traditional approaches often require detailed knowledge of complex graph querying …
External link:
http://arxiv.org/abs/2406.06621
OwnThink stands as the most extensive Chinese open-domain knowledge graph introduced in recent times. Despite prior attempts at question answering over OwnThink (OQA), existing studies have faced limitations in model representation capabilities, …
External link:
http://arxiv.org/abs/2406.02110
Large language models are often challenged by generating erroneous or 'hallucinated' responses, especially in complex reasoning tasks. To mitigate this, we propose a retrieval augmented reasoning method, FiDeLiS, which enhances knowledge graph question answering …
External link:
http://arxiv.org/abs/2405.13873
Author:
Xu, Yao, He, Shizhu, Chen, Jiabei, Wang, Zihao, Song, Yangqiu, Tong, Hanghang, Liu, Guang, Liu, Kang, Zhao, Jun
To address the issues of insufficient knowledge and hallucination in Large Language Models (LLMs), numerous studies have explored integrating LLMs with Knowledge Graphs (KGs). However, these methods are typically evaluated on conventional Knowledge Graph …
External link:
http://arxiv.org/abs/2404.14741