Temporal Feature Prediction in Audio–Visual Deepfake Detection.

Authors: Gao, Yuan; Wang, Xuelong; Zhang, Yu; Zeng, Ping; Ma, Yingjie
Source: Electronics (2079-9292); Sep 2024, Vol. 13, Issue 17, p3433, 15p
Abstract: The rapid growth of deepfake technology, which generates realistic manipulated media, poses a significant threat because of its potential for misuse, so effective detection methods are urgently needed. Current approaches often focus on a single modality or on simple fusion of audio–visual signals, which limits their accuracy. To address this, we propose a deepfake detection scheme based on bimodal temporal feature prediction, which introduces temporal feature prediction into the audio–visual bimodal deepfake detection task in order to fully exploit the temporal regularities of the audio and visual modalities. First, pairs of adjacent audio–video sequence clips are used to construct input quadruples, and a dual-stream network extracts temporal feature representations from the video and audio streams, respectively. A video prediction module and an audio prediction module capture temporal inconsistencies within each modality by predicting future temporal features and comparing them with reference features. A projection-layer network then aligns the audio and visual features, and a contrastive loss is used to perform contrastive learning that maximizes the separation between real and fake videos in the shared feature space. Experiments on the FakeAVCeleb dataset demonstrate superior performance, with an accuracy of 84.33% and an AUC of 89.91%, outperforming existing methods and confirming the effectiveness of our approach to deepfake detection. [ABSTRACT FROM AUTHOR]
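
The record contains no implementation details beyond the abstract, so the following is a minimal PyTorch sketch of the pipeline the abstract describes, under explicit assumptions: the stand-in linear encoders, the GRU-based predictors, the feature dimensions, the mean-pooled reference features, and the margin-based contrastive loss are all illustrative guesses, not the authors' actual architecture or loss formulation.

    # Minimal sketch, assuming PyTorch. Module names, dimensions, and the
    # GRU-based predictors are hypothetical stand-ins for the paper's design.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TemporalPredictor(nn.Module):
        """Predicts a future clip's features from the current clip's features."""
        def __init__(self, dim):
            super().__init__()
            self.gru = nn.GRU(dim, dim, batch_first=True)
            self.head = nn.Linear(dim, dim)

        def forward(self, feats):            # feats: (B, T, dim)
            out, _ = self.gru(feats)
            return self.head(out[:, -1])     # predicted future feature: (B, dim)

    class BimodalTemporalNet(nn.Module):
        def __init__(self, v_dim=512, a_dim=256, proj_dim=128):
            super().__init__()
            # Stand-ins for the dual-stream video/audio feature extractors.
            self.video_enc = nn.Linear(2048, v_dim)
            self.audio_enc = nn.Linear(1024, a_dim)
            self.video_pred = TemporalPredictor(v_dim)
            self.audio_pred = TemporalPredictor(a_dim)
            # Projection layers mapping both modalities into a shared space.
            self.video_proj = nn.Linear(v_dim, proj_dim)
            self.audio_proj = nn.Linear(a_dim, proj_dim)

        def forward(self, v_cur, a_cur, v_next, a_next):
            # Input quadruple: a pair of adjacent video clips and the
            # corresponding pair of adjacent audio clips.
            fv_cur, fa_cur = self.video_enc(v_cur), self.audio_enc(a_cur)
            fv_ref = self.video_enc(v_next).mean(dim=1)   # reference features
            fa_ref = self.audio_enc(a_next).mean(dim=1)
            fv_hat = self.video_pred(fv_cur)
            fa_hat = self.audio_pred(fa_cur)
            # Intra-modal temporal inconsistency: predicted vs. reference.
            pred_loss = F.mse_loss(fv_hat, fv_ref) + F.mse_loss(fa_hat, fa_ref)
            zv = F.normalize(self.video_proj(fv_hat), dim=-1)
            za = F.normalize(self.audio_proj(fa_hat), dim=-1)
            return pred_loss, zv, za

    def contrastive_loss(zv, za, labels, margin=0.5):
        """Pulls audio/visual features together for real clips (label 0)
        and pushes them at least `margin` apart for fakes (label 1)."""
        d = 1.0 - (zv * za).sum(dim=-1)       # cosine distance per sample
        real = (1 - labels) * d.pow(2)
        fake = labels * F.relu(margin - d).pow(2)
        return (real + fake).mean()

    # Usage with dummy tensors: batch of 4, clips of 8 time steps each.
    model = BimodalTemporalNet()
    v_cur, v_next = torch.randn(4, 8, 2048), torch.randn(4, 8, 2048)
    a_cur, a_next = torch.randn(4, 8, 1024), torch.randn(4, 8, 1024)
    labels = torch.tensor([0.0, 1.0, 0.0, 1.0])
    pred_loss, zv, za = model(v_cur, a_cur, v_next, a_next)
    loss = pred_loss + contrastive_loss(zv, za, labels)

In this sketch the prediction loss flags intra-modal temporal inconsistencies, while the contrastive term pulls the two modalities together for real clips and apart for fakes; how the paper actually weights and combines these objectives is not specified in the abstract.
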
Database: Complementary Index