Adaptive Reinforcement Learning Planning: Harnessing Large Language Models for Complex Information Extraction

Author: Ding, Zepeng, Ke, Ruiyang, Huang, Wenhao, Jiang, Guochao, Li, Yanda, Yang, Deqing, Liang, Jiaqing
Year of publication: 2024
Subject:
Document type: Working Paper
Description: Existing research on large language models (LLMs) shows that they can solve information extraction tasks through multi-step planning. However, their extraction behavior on complex sentences and tasks is unstable, giving rise to issues such as false positives and missing elements. We observe that decomposing complex extraction tasks and carrying them out step by step effectively improves LLMs' performance, and that the order in which entities are extracted significantly affects LLMs' final results. This paper proposes a two-stage multi-step method for LLM-based information extraction and adopts an RL framework to execute the multi-step planning. We regard sequential extraction as a Markov decision process, build an LLM-based extraction environment, design a decision module that adaptively provides the optimal order for sequential entity extraction on different sentences, and utilize the DDQN algorithm to train the decision model. We also design rewards and evaluation metrics suited to the extraction results of LLMs. Extensive experiments on multiple public datasets demonstrate the effectiveness of our method in improving the information extraction capabilities of LLMs.
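The core idea of the abstract (treating sequential entity extraction as a Markov decision process, with a learned decision module choosing which entity type to extract next) can be illustrated with a minimal sketch. The paper trains a neural DDQN against an LLM-based extraction environment; the code below is not the authors' implementation but a tabular double Q-learning toy with a stub reward, where the entity types, the reward shape, and all names are illustrative assumptions.

```python
import random

# Hypothetical entity types; states are the sets of types already extracted,
# actions are which type to extract next.
ENTITY_TYPES = ("person", "organization", "location")

def toy_reward(extracted, action):
    """Stub for LLM extraction quality: here we simply pretend that
    extracting "person" first makes later extractions easier."""
    return 1.0 if (action == "person" or "person" in extracted) else 0.5

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    qa, qb = {}, {}  # two value tables, as in double Q-learning
    q = lambda tbl, s, a: tbl.get((s, a), 0.0)
    for _ in range(episodes):
        state = frozenset()
        while len(state) < len(ENTITY_TYPES):
            actions = [a for a in ENTITY_TYPES if a not in state]
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: q(qa, state, x) + q(qb, state, x))
            r = toy_reward(state, a)
            nxt = state | {a}
            nacts = [x for x in ENTITY_TYPES if x not in nxt]
            # Double update: one table selects the next action, the other evaluates it.
            sel, ev = (qa, qb) if rng.random() < 0.5 else (qb, qa)
            if nacts:
                best = max(nacts, key=lambda x: q(sel, nxt, x))
                target = r + gamma * q(ev, nxt, best)
            else:
                target = r
            sel[(state, a)] = q(sel, state, a) + alpha * (target - q(sel, state, a))
            state = nxt
    return qa, qb

qa, qb = train()
# The greedy first action recovered from the learned values:
first = max(ENTITY_TYPES,
            key=lambda a: qa.get((frozenset(), a), 0.0) + qb.get((frozenset(), a), 0.0))
```

Under the toy reward, the learned policy extracts "person" first; the paper's method replaces the stub with an LLM environment and the tables with a DDQN so the chosen order can adapt per sentence.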
Database: arXiv