Enhancing Relation Extraction via Supervised Rationale Verification and Feedback
Author: Li, Yongqi; Miao, Xin; Zhou, Shen; Xu, Mayi; Ren, Yuyang; Qian, Tieyun
Publication Year: 2024
Document Type: Working Paper
Description: Despite the rapid progress that existing automated feedback methods have made in correcting the output of large language models (LLMs), these methods cannot be readily applied to the relation extraction (RE) task because of their designated feedback objectives and manner of correction. To address this problem, we propose a novel automated feedback framework for RE, which employs a rationale supervisor to verify the rationale and provides re-selected demonstrations as feedback to correct the initial prediction. Specifically, we first design a causal intervention and observation method to collect biased/unbiased rationales for contrastively training the rationale supervisor. Then, we present a verification-feedback-correction procedure to iteratively enhance LLMs' capability of handling the RE task. Extensive experiments show that our proposed framework significantly outperforms existing methods. (A hedged sketch of this verification-feedback-correction loop follows the record below.) Comment: Accepted to AAAI 2025, camera-ready version
Database: arXiv
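The abstract describes an iterative control flow: the LLM makes an initial prediction with a rationale, a rationale supervisor verifies the rationale, and, if it is judged biased, re-selected demonstrations are fed back for a corrected prediction. Below is a minimal Python sketch of that loop under stated assumptions. Every name here (`llm_predict`, `RationaleSupervisor`, `reselect_demonstrations`, the keyword heuristics, and the toy demonstration pool) is a hypothetical stand-in, not the paper's actual API: the real supervisor is a model contrastively trained on biased/unbiased rationales collected via causal intervention and observation, and the real re-selection is guided by that supervisor rather than by word overlap.

```python
"""Minimal sketch of a verification-feedback-correction loop for RE.

All components are illustrative stand-ins, not the paper's implementation.
"""

from dataclasses import dataclass


@dataclass
class Prediction:
    relation: str   # predicted relation label
    rationale: str  # the LLM's free-text justification for the label


def llm_predict(sentence: str, demonstrations: list[str]) -> Prediction:
    """Stand-in for an in-context-learning call to an LLM."""
    # Toy behavior: with a matching demonstration, the "model" grounds its
    # rationale in the analogy; otherwise it falls back on a surface cue.
    if "founded" in sentence and any("founded" in d for d in demonstrations):
        return Prediction("org:founded_by",
                          "Analogous to the demonstration, the founder relation holds.")
    if "founded" in sentence:
        return Prediction("org:founded_by", "The word 'founded' links the entities.")
    return Prediction("no_relation", "No relational cue is present.")


class RationaleSupervisor:
    """Stand-in for the contrastively trained rationale supervisor."""

    def is_biased(self, pred: Prediction) -> bool:
        # Toy proxy for bias detection: flag rationales that lean on a single
        # surface word rather than on a demonstrated relation pattern.
        return "word" in pred.rationale.lower()


def reselect_demonstrations(pool: list[str], sentence: str) -> list[str]:
    """Stand-in for the feedback step: re-select demonstrations."""
    # Toy policy: rank demonstrations by word overlap with the input sentence.
    words = set(sentence.lower().split())
    return sorted(pool, key=lambda d: -len(words & set(d.lower().split())))[:2]


def verify_feedback_correct(sentence: str, pool: list[str],
                            max_rounds: int = 3) -> Prediction:
    """Predict, verify the rationale, and re-predict with new demonstrations."""
    supervisor = RationaleSupervisor()
    demos = pool[:2]                      # initial demonstration choice
    pred = llm_predict(sentence, demos)   # initial prediction
    for _ in range(max_rounds):
        if not supervisor.is_biased(pred):
            break                                        # verification passed
        demos = reselect_demonstrations(pool, sentence)  # feedback
        pred = llm_predict(sentence, demos)              # correction
    return pred


if __name__ == "__main__":
    demo_pool = [
        "Example: 'X works at Y' -> per:employee_of",
        "Example: 'X and Y met' -> no_relation",
        "Example: 'X founded Y' -> org:founded_by",
    ]
    print(verify_feedback_correct("Jobs founded Apple in 1976.", demo_pool))
```

In this toy run, the initial demonstrations lack a relevant example, so the first rationale relies on a surface word and fails verification; the re-selected demonstrations then yield a rationale that passes, after which the loop stops. The `max_rounds` cap mirrors the iterative nature of the described procedure; the actual stopping criterion and number of iterations in the paper are not specified in the abstract.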