Author: |
Dhairya Vyas, Viral V. Kapadia |
Language: |
English |
Year of publication: |
2024 |
Subject: |
|
Source: |
PeerJ Computer Science, Vol 10, p e1868 (2024) |
Document type: |
article |
ISSN: |
2376-5992 |
DOI: |
10.7717/peerj-cs.1868 |
Description: |
Adversarial attacks pose a significant challenge to deep neural networks used in image classification systems. Although deep learning has achieved impressive success in various tasks, it can easily be deceived by adversarial patches created by adding subtle yet deliberate distortions to natural images. These attacks are designed to remain imperceptible to both human observers and computer-based classifiers. To address this, we propose novel model designs that enhance adversarial robustness by incorporating feature denoising blocks. Specifically, the proposed model uses Gaussian data augmentation (GDA) and spatial smoothing (SS) to denoise the features. These techniques are inexpensive and can be combined in a joint framework to achieve higher recognition accuracy under adversarial attacks while complementing other defenses. We tested the proposed approach on the ImageNet and CIFAR-10 datasets using 10-iteration projected gradient descent (PGD), the fast gradient sign method (FGSM), and DeepFool attacks. The proposed method achieved an accuracy of 95.62% in under four minutes, which is highly competitive with existing approaches. We also conducted a comparative analysis with existing methods. |
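To illustrate the two denoising operations named in the abstract, the following is a minimal NumPy sketch, not the authors' implementation: the mean-filter form of spatial smoothing, the kernel size, the noise level, and the toy sign-perturbation are all illustrative assumptions.

```python
import numpy as np

def gaussian_data_augmentation(x, sigma=0.05, rng=None):
    # GDA: add zero-mean Gaussian noise to an image in [0, 1] (illustrative sigma).
    rng = rng or np.random.default_rng(0)
    return np.clip(x + rng.normal(0.0, sigma, x.shape), 0.0, 1.0)

def spatial_smoothing(x, k=3):
    # SS: a simple k x k mean filter with edge padding (one common form of
    # spatial smoothing; the paper's exact filter may differ).
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

# Toy "adversarial" input: a flat image plus a high-frequency sign perturbation,
# loosely mimicking an FGSM-style distortion.
clean = np.full((8, 8), 0.5)
perturb = 0.2 * np.sign(np.random.default_rng(1).normal(size=(8, 8)))
adv = np.clip(clean + perturb, 0.0, 1.0)

denoised = spatial_smoothing(gaussian_data_augmentation(adv), k=3)
err_before = np.abs(adv - clean).mean()
err_after = np.abs(denoised - clean).mean()
```

Averaging over a neighborhood cancels most of the sign-flipping perturbation, so `err_after` comes out well below `err_before`, which is the intuition behind using these filters as feature denoisers.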
Databáze: |
Directory of Open Access Journals |
External link: |
|