Showing 1 - 10 of 119 results for search: '"Wu, Yike"'
Recent studies have explored the use of Large Language Models (LLMs) with Retrieval Augmented Generation (RAG) for Knowledge Graph Question Answering (KGQA). They typically require rewriting retrieved subgraphs into natural language formats comprehen…
External link:
http://arxiv.org/abs/2409.19753
In the realm of data-driven AI technology, the application of open-source large language models (LLMs) in robotic task planning represents a significant milestone. Recent robotic task planning methods based on open-source LLMs typically leverage vast…
External link:
http://arxiv.org/abs/2403.18760
The attribution of question answering is to provide citations for supporting generated statements, and has attracted wide research attention. The current methods for automatically evaluating the attribution, which are often based on Large Language Mo…
External link:
http://arxiv.org/abs/2401.14640
Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering
Despite their competitive performance on knowledge-intensive tasks, large language models (LLMs) still have limitations in memorizing all world knowledge, especially long-tail knowledge. In this paper, we study the KG-augmented language model approach…
External link:
http://arxiv.org/abs/2309.11206
Author:
Hu, Mengting, Bai, Yinhao, Wu, Yike, Zhang, Zhen, Zhang, Liqi, Gao, Hang, Zhao, Shiwan, Huang, Minlie
Recently, aspect sentiment quad prediction has received widespread attention in the field of aspect-based sentiment analysis. Existing studies extract quadruplets via pre-trained generative language models to paraphrase the original sentence into a t…
External link:
http://arxiv.org/abs/2306.00418
Entity Alignment (EA) aims to find the equivalent entities between two Knowledge Graphs (KGs). Existing methods usually encode the triples of entities as embeddings and learn to align the embeddings, which prevents the direct interaction between the…
External link:
http://arxiv.org/abs/2305.11501
Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP). It is now the consensus of the NLP community to adopt PLMs as the backbone for downstream ta…
External link:
http://arxiv.org/abs/2303.10368
Author:
Gao, Fengli1 (AUTHOR) flgao@aynu.edu.cn, Wu, Yike1 (AUTHOR) 19711083859@163.com, Gan, Cui1 (AUTHOR) m17796544225@163.com, Hou, Yupeng1 (AUTHOR) 16692208021@163.com, Deng, Dehua1 (AUTHOR) ddh@aynu.edu.cn, Yi, Xinyao2 (AUTHOR) ddh@aynu.edu.cn
Published in:
Sensors (ISSN 1424-8220). Oct 2024, Vol. 24, Issue 19, p6458. 42p.
Recently, aspect sentiment quad prediction (ASQP) has become a popular task in the field of aspect-level sentiment analysis. Previous work utilizes a predefined template to paraphrase the original sentence into a structured target sequence, which can…
External link:
http://arxiv.org/abs/2210.10291
Multimodal knowledge graph completion (MKGC) aims to predict missing entities in MKGs. Previous works usually share relation representation across modalities. This results in mutual interference between modalities during training, since for a pair of…
External link:
http://arxiv.org/abs/2210.08821