Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks

Author: Waghela, Hetvi; Sen, Jaydip; Rakshit, Sneha
Publication Year: 2024
Subject:
Document Type: Working Paper
Description: Adversarial attacks, particularly the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), pose significant threats to the robustness of deep learning models in image classification. This paper explores and refines defense mechanisms against these attacks to enhance the resilience of neural networks. We employ a combination of adversarial training and innovative preprocessing techniques, aiming to mitigate the impact of adversarial perturbations. Our methodology involves modifying input data before classification and investigating different model architectures and training strategies. Through rigorous evaluation on benchmark datasets, we demonstrate the effectiveness of our approach in defending against FGSM and PGD attacks. Our results show substantial improvements in model robustness compared to baseline methods, highlighting the potential of our defense strategies in real-world applications. This study contributes to the ongoing efforts to develop secure and reliable machine learning systems, offering practical insights and paving the way for future research in adversarial defense. By bridging theoretical advancements and practical implementation, we aim to enhance the trustworthiness of AI applications in safety-critical domains.
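For readers unfamiliar with the two attacks named in the abstract, the following is a minimal illustrative sketch of FGSM and PGD perturbation generation in PyTorch. It is not the authors' implementation; the function names, the epsilon/alpha/step parameters, and the assumption of inputs normalized to [0, 1] are all choices made here for illustration only.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """Single-step FGSM: move inputs along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # One step of size eps, clipped to the valid pixel range (assumed [0, 1]).
    return torch.clamp(x_adv + eps * grad.sign(), 0.0, 1.0).detach()

def pgd_attack(model, x, y, eps, alpha, steps):
    """Multi-step PGD: repeated FGSM-style steps, projected back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps)  # project into the L_inf ball around x
        x_adv = torch.clamp(x_adv, 0.0, 1.0)          # keep pixels in the valid range
    return x_adv
```

In adversarial training, such attack routines are typically called inside the training loop so the model is optimized on the perturbed examples; the specific training schedule and preprocessing defenses described in the abstract are detailed in the paper itself.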
Comment: This is the preprint of the paper that has been accepted for oral presentation and publication in the Proceedings of the IEEE Asian Conference on Intelligent Technologies (ACOIT'2024). The conference will be held in Kolar, Karnataka, India, from September 6 to 7, 2024. The paper is 8 pages long and contains 9 figures and 4 tables. This is NOT the final version of the paper.
Database: arXiv