BERTective: Language Models and Contextual Information for Deception Detection

Authors: Massimo Poesio, Dirk Hovy, Federico Bianchi, Tommaso Fornaciari
Year of publication: 2021
Source: EACL; Scopus-Elsevier
DOI: 10.18653/v1/2021.eacl-main.232
Description: Spotting a lie is challenging but has an enormous potential impact on security as well as private and public safety. Several NLP methods have been proposed to classify texts as truthful or deceptive. In most cases, however, the target texts' preceding context is not considered. This is a severe limitation, as any communication takes place in context, not in a vacuum, and context can help to detect deception. We study a corpus of Italian dialogues containing deceptive statements and implement deep neural models that incorporate various linguistic contexts. We establish a new state of the art in identifying deception and find that not all context is equally useful to the task. Only the texts closest to the target, if from the same speaker (rather than questions by an interlocutor), boost performance. We also find that the semantic information in language models such as BERT contributes to performance. However, BERT alone does not capture the implicit knowledge of deception cues: its contribution is conditional on the concurrent use of attention to learn cues from BERT's representations.
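The abstract notes that attention over BERT's representations is what lets the model learn deception cues. As a minimal illustrative sketch (not the paper's actual architecture), the following shows additive attention pooling over contextual token embeddings: each token gets a learned relevance score, and the sequence is summarized as a weighted sum. The random embeddings, dimensions, and weight vector are stand-ins for BERT output and trained parameters.

```python
import numpy as np

def softmax(x, axis):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(token_embeddings, w_attn):
    # token_embeddings: (seq_len, hidden); w_attn: (hidden,) learned scorer
    scores = token_embeddings @ w_attn        # one relevance score per token
    weights = softmax(scores, axis=0)         # attention distribution over tokens
    return weights @ token_embeddings         # weighted sum -> (hidden,) summary

rng = np.random.default_rng(0)
emb = rng.standard_normal((16, 8))  # hypothetical stand-in for BERT token vectors
w = rng.standard_normal(8)          # hypothetical attention parameters
pooled = attention_pool(emb, w)
print(pooled.shape)  # (8,)
```

In a full classifier, the pooled vector would be fed to a linear layer predicting truthful vs. deceptive; the key point is that the attention weights, not BERT alone, single out which tokens carry the cues.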
Database: OpenAIRE