KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation

Author: Liang, Lei, Sun, Mengshu, Gui, Zhengke, Zhu, Zhongshu, Jiang, Zhouyu, Zhong, Ling, Qu, Yuan, Zhao, Peilong, Bo, Zhongpu, Yang, Jin, Xiong, Huaidong, Yuan, Lin, Xu, Jun, Wang, Zaoyang, Zhang, Zhiqiang, Zhang, Wen, Chen, Huajun, Chen, Wenguang, Zhou, Jun
Year of publication: 2024
Document type: Working Paper
Description: The recently developed retrieval-augmented generation (RAG) technology has enabled the efficient construction of domain-specific applications. However, it also has limitations: a gap between vector similarity and the relevance of knowledge reasoning, and insensitivity to knowledge logic such as numerical values, temporal relations, and expert rules, which hinders the effectiveness of professional knowledge services. In this work, we introduce a professional domain knowledge service framework called Knowledge Augmented Generation (KAG). KAG is designed to address the above challenges by making full use of the complementary strengths of knowledge graphs (KGs) and vector retrieval, and to improve generation and reasoning performance by bidirectionally enhancing large language models (LLMs) and KGs through five key aspects: (1) LLM-friendly knowledge representation, (2) mutual-indexing between knowledge graphs and original chunks, (3) a logical-form-guided hybrid reasoning engine, (4) knowledge alignment with semantic reasoning, and (5) model capability enhancement for KAG. We compared KAG with existing RAG methods on multi-hop question answering and found that it significantly outperforms state-of-the-art methods, achieving a relative F1 improvement of 19.6% on 2wiki and 33.5% on hotpotQA. We have also applied KAG to two professional knowledge Q&A tasks at Ant Group, E-Government Q&A and E-Health Q&A, achieving a significant improvement in professionalism over RAG methods.
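To make the "mutual-indexing between knowledge graphs and original chunks" aspect concrete, the sketch below shows one minimal way such a bidirectional index could be wired up: extracted triples keep pointers back to the text chunks they came from, and chunks keep pointers to their triples, so retrieval can hop from graph entities to supporting source text. All names here (Chunk, Triple, MutualIndex, chunks_for_entity) are hypothetical illustrations under stated assumptions, not the KAG implementation.

```python
# Hypothetical sketch of a KG <-> chunk mutual index (not the KAG codebase).
from dataclasses import dataclass, field


@dataclass
class Chunk:
    chunk_id: str
    text: str
    triple_ids: list[str] = field(default_factory=list)  # chunk -> triples


@dataclass
class Triple:
    triple_id: str
    subject: str
    predicate: str
    obj: str
    source_chunk_id: str  # triple -> originating chunk


class MutualIndex:
    """Bidirectional index between extracted triples and original chunks."""

    def __init__(self) -> None:
        self.chunks: dict[str, Chunk] = {}
        self.triples: dict[str, Triple] = {}

    def add_chunk(self, chunk: Chunk) -> None:
        self.chunks[chunk.chunk_id] = chunk

    def add_triple(self, triple: Triple) -> None:
        self.triples[triple.triple_id] = triple
        # Register the back-link on the source chunk.
        self.chunks[triple.source_chunk_id].triple_ids.append(triple.triple_id)

    def chunks_for_entity(self, entity: str) -> list[Chunk]:
        # Graph -> text: return chunks whose triples mention the entity.
        return [
            self.chunks[t.source_chunk_id]
            for t in self.triples.values()
            if entity in (t.subject, t.obj)
        ]


if __name__ == "__main__":
    index = MutualIndex()
    index.add_chunk(Chunk("c1", "Marie Curie was born in Warsaw."))
    index.add_triple(Triple("t1", "Marie Curie", "born_in", "Warsaw", "c1"))
    # Hop from a graph entity back to the supporting source text.
    print([c.text for c in index.chunks_for_entity("Warsaw")])
```

In the paper's framing, an index of this shape is what lets graph-guided reasoning ground its answers in the original passages rather than in graph structure alone.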
Comment: 33 pages
Database: arXiv