Author:
Shahid Karim, Xin Liu, Abdullah Ayub Khan, Asif Ali Laghari, Akeel Qadir, Irfana Bibi
Language:
English
Year of publication:
2024
Subject:

Source:
Scientific Reports, Vol 14, Iss 1, Pp 1-20 (2024)
Document type:
article
ISSN:
2045-2322
DOI:
10.1038/s41598-024-80842-z
Description:
Abstract The proliferation of multimedia-based deepfake content in recent years has posed significant challenges to information security and authenticity, necessitating detection methods that go beyond conventional dynamic approaches. In this paper, we combine Deep Generative Adversarial Networks (GANs) with Transfer Learning (TL) to introduce a new technique for identifying deepfakes in multimedia systems. Each GAN architecture can be customized to detect subtle manipulations in a different multimedia format, and combining their strengths yields a multi-collaborative framework, "MCGAN", that handles audio, video, and image files. Compared against other state-of-the-art techniques, the framework improves the accuracy rate by up to 17.333% and strengthens the deepfake detection hierarchy. To accelerate training and enable the system to respond quickly to novel patterns that indicate deepfakes, TL applies pre-training on the same databases. The proposed method performs well at identifying deepfake content, enhancing real-time detection capability across a range of multimedia scenarios while preserving a high level of accuracy. This development contributes to a progressive hierarchy that ensures information integrity in the digital world and in related research.
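The abstract's transfer-learning step, pre-training a detector and then fine-tuning it so it adapts quickly to new deepfake patterns, can be illustrated with a deliberately minimal sketch. The code below does not reproduce MCGAN or any GAN component; it uses a plain logistic-regression "detector" on synthetic features (all names and data are illustrative assumptions) to show the warm-start idea: weights learned on a source task initialize training on a related target task.

```python
import numpy as np

# Illustrative sketch only: synthetic data, not the paper's MCGAN.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w=None, lr=0.1, epochs=200):
    """Logistic regression via gradient descent; `w` allows a warm start."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

# "Source" task: pre-train on plentiful synthetic real-vs-fake features.
Xs = rng.normal(size=(400, 8))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(float)
w_pre = train(Xs, ys)

# "Target" task: a related but shifted decision rule, with little data.
Xt = rng.normal(size=(100, 8))
yt = (Xt[:, 0] + 0.8 * Xt[:, 1] > 0).astype(float)

# Transfer learning: fine-tune from pre-trained weights for few epochs,
# versus training from scratch on the same small target set.
w_tl = train(Xt, yt, w=w_pre.copy(), epochs=20)
w_scratch = train(Xt, yt, epochs=20)

print(f"transfer: {accuracy(w_tl, Xt, yt):.2f}  "
      f"scratch: {accuracy(w_scratch, Xt, yt):.2f}")
```

The warm-started model typically needs far fewer target-task updates to reach useful accuracy, which is the speed-up the abstract attributes to pre-training on the same databases.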
Database:
Directory of Open Access Journals
External link:
