Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection
Author: | Hu, Beizhe; Sheng, Qiang; Cao, Juan; Shi, Yuhui; Li, Yang; Wang, Danding; Qi, Peng |
---|---|
Publication Year: | 2023 |
Subject: | |
Source: | AAAI 2024 |
Document Type: | Working Paper |
DOI: | 10.1609/aaai.v38i20.30214 |
Description: | Detecting fake news requires both a delicate sense of diverse clues and a profound understanding of the real-world background, which remains challenging for detectors based on small language models (SLMs) due to their knowledge and capability limitations. Recent advances in large language models (LLMs) have shown remarkable performance in various tasks, but whether and how LLMs could help with fake news detection remains underexplored. In this paper, we investigate the potential of LLMs in fake news detection. First, we conduct an empirical study and find that a sophisticated LLM such as GPT-3.5 can generally expose fake news and provide desirable multi-perspective rationales, yet it still underperforms the basic SLM, a fine-tuned BERT. Our subsequent analysis attributes this gap to the LLM's inability to select and integrate rationales properly when reaching a verdict. Based on these findings, we propose that current LLMs may not substitute for fine-tuned SLMs in fake news detection but can be a good advisor to SLMs by providing multi-perspective instructive rationales. To instantiate this proposal, we design an adaptive rationale guidance network for fake news detection (ARG), in which SLMs selectively acquire insights on news analysis from the LLMs' rationales (an illustrative sketch follows this record). We further derive a rationale-free version of ARG by distillation, namely ARG-D, which serves cost-sensitive scenarios without querying LLMs. Experiments on two real-world datasets demonstrate that ARG and ARG-D outperform three types of baseline methods: SLM-based, LLM-based, and combinations of small and large language models. Comment: 16 pages, 5 figures, and 9 tables. To appear at AAAI 2024 |
Database: | arXiv |
External Link: |
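
Below is a minimal, hypothetical PyTorch sketch of the adaptive rationale guidance idea summarized in the abstract: token states from a fine-tuned SLM attend over an LLM-provided rationale, and a learned gate decides how much rationale signal to fuse into the final real-vs-fake judgment. All class, module, and variable names are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the adaptive rationale guidance (ARG) idea, assuming
# PyTorch and pre-computed encoder outputs. Not the paper's actual code.
import torch
import torch.nn as nn

class ARGSketch(nn.Module):
    def __init__(self, hidden: int = 768, num_classes: int = 2):
        super().__init__()
        # Cross-attention: the news representation queries the rationale tokens.
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        # Gate scoring how useful this rationale is for this news item, in [0, 1].
        self.gate = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, news_repr: torch.Tensor, rationale_repr: torch.Tensor) -> torch.Tensor:
        # news_repr:      (batch, news_len, hidden) token states from a fine-tuned SLM
        # rationale_repr: (batch, rat_len, hidden)  token states of an LLM rationale
        attended, _ = self.cross_attn(news_repr, rationale_repr, rationale_repr)
        news_vec = news_repr.mean(dim=1)   # pooled news representation
        rat_vec = attended.mean(dim=1)     # pooled rationale-aware representation
        g = self.gate(torch.cat([news_vec, rat_vec], dim=-1))
        fused = news_vec + g * rat_vec     # selectively acquire rationale insight
        return self.classifier(fused)

# Usage with random features standing in for encoder outputs:
model = ARGSketch()
news = torch.randn(4, 128, 768)
rationale = torch.randn(4, 64, 768)
logits = model(news, rationale)  # (4, 2): real vs. fake
```

The gate models the "selective acquisition" described in the abstract: when a rationale is unhelpful, the gate can close and the classifier falls back on the SLM representation alone. The rationale-free ARG-D variant would then distill such a gated model into a student that needs no LLM query at inference time.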