LLM Instruction-Example Adaptive Prompting (LEAP) Framework for Clinical Relation Extraction.

Author: Zhou H; Institute for Health Informatics, University of Minnesota, Minneapolis, Minnesota, USA., Li M; Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, Minnesota, USA., Xiao Y; Institute for Health Informatics, University of Minnesota, Minneapolis, Minnesota, USA., Yang H; Institute for Health Informatics, University of Minnesota, Minneapolis, Minnesota, USA., Zhang R; Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, Minnesota, USA.
Language: English
Source: MedRxiv: the preprint server for health sciences [medRxiv] 2023 Dec 17. Date of Electronic Publication: 2023 Dec 17.
DOI: 10.1101/2023.12.15.23300059
Abstract: Objective: To investigate demonstration design in large language models (LLMs) for clinical relation extraction. We examine two types of adaptive demonstration, instruction-adaptive prompting and example-adaptive prompting, to understand their impact and effectiveness.
Materials and Methods: The study unfolds in two stages. Initially, we explored a range of demonstration components vital to LLMs' clinical data extraction, such as task descriptions and examples, and tested their combinations. Subsequently, we introduced the Instruction-Example Adaptive Prompting (LEAP) Framework, a system that integrates two types of adaptive prompts: one preceding the instruction and another preceding the examples. This framework is designed to systematically explore both adaptive task descriptions and adaptive examples within the demonstration. We evaluated the LEAP framework's performance on the DDI and BC5CDR chemical interaction datasets, applying it across LLMs such as Llama2-7b, Llama2-13b, and MedLLaMA_13B.
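To make the two-slot structure concrete, here is a minimal Python sketch of how such a prompt might be composed: one adaptive segment placed before the task instruction and another placed before the demonstration examples, as the abstract describes. This is an illustration under stated assumptions, not the authors' implementation; the function name, the adaptive wording, and the example sentences are hypothetical, and the label set shown is the DDI 2013 relation types.

```python
# Hypothetical sketch of LEAP-style two-slot adaptive prompting.
# build_leap_prompt and all argument text are illustrative, not from the paper.

def build_leap_prompt(instruction_adaptive: str,
                      instruction: str,
                      options: list[str],
                      example_adaptive: str,
                      examples: list[tuple[str, str]],
                      query: str) -> str:
    """Compose a prompt with adaptive text before the instruction and before the examples."""
    parts = [
        instruction_adaptive,             # adaptive prompt preceding the instruction
        instruction,                      # fixed task description
        "Options: " + ", ".join(options), # candidate relation labels
        example_adaptive,                 # adaptive prompt preceding the examples
    ]
    for sentence, label in examples:      # few-shot demonstrations
        parts.append(f"Input: {sentence}\nRelation: {label}")
    parts.append(f"Input: {query}\nRelation:")  # the query to be labeled
    return "\n\n".join(parts)

prompt = build_leap_prompt(
    instruction_adaptive="You are an expert in biomedical text mining.",
    instruction="Classify the drug-drug interaction expressed between the "
                "two entities marked in the sentence.",
    options=["mechanism", "effect", "advise", "int", "none"],
    example_adaptive="Study the labeled examples before answering.",
    examples=[("@DRUG$ may increase the serum concentration of @DRUG$.",
               "mechanism")],
    query="Concurrent use of @DRUG$ and @DRUG$ should be avoided.",
)
print(prompt)
```

The resulting string would then be sent to a model such as Llama2-7b; varying only the two adaptive segments is what lets the framework compare adaptive task descriptions against adaptive examples.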
Results: The study revealed that the Instruction + Options + Examples configuration and its expanded form substantially raised F1 scores over the standard Instruction + Options mode. The LEAP framework excelled, especially with example-adaptive prompting, which outperformed traditional instruction tuning across models. Notably, the MedLLaMA_13B model achieved an F1 score of 95.13 on the BC5CDR dataset with this method. Significant improvements were also seen on the DDI 2013 dataset, confirming the method's robustness for sophisticated data extraction.
Conclusion: The LEAP framework presents a promising avenue for refining LLM training strategies, steering away from extensive fine-tuning toward more contextually rich and dynamic prompting methodologies.
Competing Interests: The authors declare that they have no competing interests.
Database: MEDLINE