An explainable deepfake detection framework on a novel unconstrained dataset
Author: Sherin Mathews, Shivangee Trivedi, Amanda House, Steve Povolny, Celeste Fralick
Language: English
Year of publication: 2023
Subject:
Source: Complex & Intelligent Systems, Vol 9, Iss 4, Pp 4425-4437 (2023)
Document type: article
ISSN: 2199-4536; 2198-6053
DOI: 10.1007/s40747-022-00956-7
Description: Abstract In this work, we created a new large-scale unconstrained high-quality Deepfake Image (DFIM-HQ) dataset containing 140K images. Compared to existing datasets, this dataset includes a variety of diverse scenarios, pose variations, high-quality degradations, and illumination variations, making it particularly challenging. Since computer vision models learn to perform a task by capturing relevant statistics from training data, they tend to learn spurious correlations with age, gender, and race, leading to biased models. To account for AI bias in our proposed DFIM-HQ dataset, we design a simple yet effective image recognition benchmark for studying bias mitigation. Our detection system uses an Inception-based network to extract frame-level features and automatically detect manipulated content. We also propose an explainability framework that provides a better understanding of the model’s predictions. These insights can be used to improve the model and thereby build trust in it. Our evaluation illustrates that our frameworks achieve competitive results in detecting deepfake images using deep learning architectures.
Database: Directory of Open Access Journals
External link: