Author: |
Montserrat, Daniel Mas; Hao, Hanxiang; Yarlagadda, S. K.; Baireddy, Sriram; Shao, Ruiting; Horváth, János; Bartusiak, Emily; Yang, Justin; Güera, David; Zhu, Fengqing; Delp, Edward J. |
Publication year: |
2020 |
Subject: |
|
Document type: |
Working Paper |
Description: |
Altered and manipulated multimedia is increasingly present and widely distributed via social media platforms. Advanced video manipulation tools enable the generation of highly realistic-looking altered multimedia. While many methods have been presented to detect manipulations, most of them fail when evaluated on data outside of the datasets used in research environments. To address this problem, the Deepfake Detection Challenge (DFDC) provides a large dataset of videos containing realistic manipulations and an evaluation system that ensures that methods work quickly and accurately, even when faced with challenging data. In this paper, we introduce a method based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs) that extracts visual and temporal features from faces present in videos to accurately detect manipulations. The method is evaluated on the DFDC dataset and achieves competitive results compared to other techniques. |
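Note: the abstract describes a pipeline in which a CNN extracts visual features from face crops in each frame and an RNN aggregates them over time before a binary "manipulated or not" decision. The sketch below illustrates that general CNN+RNN pattern only; it is not the authors' implementation. The ResNet-18 backbone, GRU layer, hidden size, clip length, and the class name CnnRnnDeepfakeDetector are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models


class CnnRnnDeepfakeDetector(nn.Module):
    """Per-frame CNN features from face crops, aggregated over time by a GRU.

    Generic sketch of a CNN+RNN video-manipulation detector; backbone and
    hyperparameters are illustrative, not taken from the paper.
    """

    def __init__(self, hidden_size: int = 256):
        super().__init__()
        # Image backbone used as a per-frame feature extractor
        # (in practice a pretrained backbone would typically be loaded).
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # keep pooled features, drop classifier
        self.cnn = backbone
        # Recurrent layer models temporal consistency across frames.
        self.rnn = nn.GRU(feat_dim, hidden_size, batch_first=True)
        # Single logit: how likely the face sequence is manipulated.
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, faces: torch.Tensor) -> torch.Tensor:
        # faces: (batch, frames, 3, H, W) -- aligned face crops from one video
        b, t, c, h, w = faces.shape
        feats = self.cnn(faces.view(b * t, c, h, w)).view(b, t, -1)
        _, last_hidden = self.rnn(feats)          # (1, batch, hidden_size)
        return self.classifier(last_hidden.squeeze(0))  # raw logits


if __name__ == "__main__":
    model = CnnRnnDeepfakeDetector()
    clip = torch.randn(2, 8, 3, 224, 224)  # two clips of eight face crops
    print(torch.sigmoid(model(clip)))       # per-clip manipulation probability
```

In this kind of setup the CNN captures per-frame visual artifacts while the recurrent layer picks up temporal inconsistencies between consecutive face crops; the sigmoid of the final logit is read as the probability that the clip is manipulated.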
Database: |
arXiv |
External link: |
|