Exposing influence campaigns in the age of LLMs: a behavioral-based AI approach to detecting state-sponsored trolls.
Author: | Ezzeddine F; Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland.; Department of Applied Mathematics, Faculty of Science, Lebanese University, Beirut, Lebanon., Ayoub O; Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland., Giordano S; Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland., Nogara G; Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland., Sbeity I; Department of Applied Mathematics, Faculty of Science, Lebanese University, Beirut, Lebanon., Ferrara E; Information Sciences Institute, Viterbi School of Engineering, University of Southern California, Marina del Rey, CA, USA., Luceri L; Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Lugano, Switzerland.; Information Sciences Institute, Viterbi School of Engineering, University of Southern California, Marina del Rey, CA, USA. |
---|---|
Language: | English |
Source: | EPJ Data Science [EPJ Data Sci] 2023; Vol. 12 (1), pp. 46. Date of Electronic Publication: 2023 Oct 09. |
DOI: | 10.1140/epjds/s13688-023-00423-4 |
Abstract: | The detection of state-sponsored trolls operating in influence campaigns on social media is a critical and unsolved challenge for the research community, which has significant implications beyond the online realm. To address this challenge, we propose a new AI-based solution that identifies troll accounts solely through behavioral cues associated with their sequences of sharing activity, encompassing both their actions and the feedback they receive from others. Our approach does not incorporate any textual content shared and consists of two steps: First, we leverage an LSTM-based classifier to determine whether account sequences belong to a state-sponsored troll or an organic, legitimate user. Second, we employ the classified sequences to calculate a metric named the "Troll Score", quantifying the degree to which an account exhibits troll-like behavior. To assess the effectiveness of our method, we examine its performance in the context of the 2016 Russian interference campaign during the U.S. Presidential election. Our experiments yield compelling results, demonstrating that our approach can identify account sequences with an AUC close to 99% and accurately differentiate between Russian trolls and organic users with an AUC of 91%. Notably, our behavioral-based approach holds a significant advantage in the ever-evolving landscape, where textual and linguistic properties can be easily mimicked by Large Language Models (LLMs): In contrast to existing language-based techniques, it relies on more challenging-to-replicate behavioral cues, ensuring greater resilience in identifying influence campaigns, especially given the potential increase in the usage of LLMs for generating inauthentic content. Finally, we assessed the generalizability of our solution to various entities driving different information operations and found promising results that will guide future research. (An illustrative sketch of the two-step pipeline appears after this record.) Competing Interests: The authors declare that they have no competing interests. (© Springer-Verlag GmbH, DE 2023.) |
Database: | MEDLINE |
External link: |
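
The abstract above describes a two-step, content-agnostic pipeline: an LSTM classifier over an account's encoded sharing-activity sequences, followed by an account-level "Troll Score" aggregated from the per-sequence predictions. The following is a minimal sketch of one plausible reading of that pipeline in PyTorch; the behavioral-event vocabulary, the model sizes, and the scoring rule (here taken as the fraction of an account's sequences classified as troll-like) are assumptions made for illustration, not the authors' exact design.

```python
# Minimal sketch (assumptions noted): an LSTM classifier over integer-encoded
# behavioral-event sequences, plus an account-level "Troll Score" computed as
# the share of an account's sequences the classifier labels troll-like.
from typing import List

import torch
import torch.nn as nn

# Hypothetical vocabulary of behavioral events (actions taken and feedback received).
ACTIONS = ["tweet", "retweet", "reply", "mention",
           "got_retweeted", "got_replied", "got_mentioned"]
ACTION_TO_ID = {a: i + 1 for i, a in enumerate(ACTIONS)}  # 0 is reserved for padding


class SequenceClassifier(nn.Module):
    """LSTM over an activity sequence -> probability the sequence is troll-like."""

    def __init__(self, vocab_size: int, embed_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len) of event ids; returns (batch,) probabilities.
        emb = self.embed(x)
        _, (h_n, _) = self.lstm(emb)          # h_n: (1, batch, hidden_dim)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)


def encode(sequence: List[str], max_len: int = 20) -> torch.Tensor:
    """Map a list of behavioral events to a fixed-length, zero-padded id tensor."""
    ids = [ACTION_TO_ID.get(a, 0) for a in sequence][:max_len]
    ids += [0] * (max_len - len(ids))
    return torch.tensor(ids, dtype=torch.long)


def troll_score(model: SequenceClassifier, account_sequences: List[List[str]],
                threshold: float = 0.5) -> float:
    """Assumed scoring rule: fraction of an account's sequences classified as troll-like."""
    model.eval()
    with torch.no_grad():
        batch = torch.stack([encode(seq) for seq in account_sequences])
        probs = model(batch)
    return float((probs >= threshold).float().mean())


if __name__ == "__main__":
    model = SequenceClassifier(vocab_size=len(ACTIONS) + 1)
    # Toy, untrained example: two activity sequences belonging to one account.
    sequences = [
        ["tweet", "got_retweeted", "retweet", "retweet", "reply"],
        ["mention", "tweet", "got_replied", "retweet"],
    ]
    print(f"Troll Score (untrained model, illustrative only): {troll_score(model, sequences):.2f}")
```

Because the model consumes only behavioral events and never the shared text, LLM-generated content would leave its input features unchanged, which is the resilience argument the abstract makes against language-based detection techniques.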