Demystifying the transferability of adversarial attacks in computer networks

Authors: Ehsan Nowroozi, Yassine Mekdad, Mohammad Hajian Berenjestanaki, Mauro Conti, Abdeslam El Fergougui
Language: English
Year of publication: 2022
Subject:
FOS: Computer and information sciences
Computer Science - Machine Learning
Cybersecurity
Computer Science - Cryptography and Security
Computer Science - Artificial Intelligence
Computer Networks and Communications
Computer Vision and Pattern Recognition (cs.CV)
Adversarial examples
Adversarial machine learning
Attack transferability
Botnet
Computational modeling
Computer networks
Convolutional neural networks
Deep learning
Machine and Deep learning
Malware
Neural networks
Computer Science - Computer Vision and Pattern Recognition
Machine Learning (cs.LG)
Computer Science - Networking and Internet Architecture
T Technology (General)
T58.5 Information technology
Electrical and Electronic Engineering
Networking and Internet Architecture (cs.NI)
QA075 Electronic computers. Computer science
Artificial Intelligence (cs.AI)
QA076 Computer software
Cryptography and Security (cs.CR)
DOI: 10.1109/TNSM.2022.3164354
Description: Convolutional Neural Network (CNN) models are among the most frequently used deep learning architectures, and they are extensively used in both academia and industry. Recent studies demonstrated that adversarial attacks against such models can retain their effectiveness even when used on models other than the one targeted by the attacker. This major property is known as transferability, and it makes CNNs ill-suited for security applications. In this paper, we provide the first comprehensive study that assesses the robustness of CNN-based models for computer networks against adversarial transferability. Furthermore, we investigate whether the transferability property holds in computer network applications. In our experiments, we first consider five different attacks: the Iterative Fast Gradient Sign Method (I-FGSM), the Jacobian-based Saliency Map Attack (JSMA), the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) attack, the Projected Gradient Descent (PGD), and the DeepFool attack. Then, we perform these attacks against three well-known datasets: the Network-based Detection of IoT Botnet Attacks (N-BaIoT) dataset, the Domain Generating Algorithms (DGA) dataset, and the RIPE Atlas dataset. Our experimental results clearly show that transferability occurs in specific use cases for the I-FGSM, JSMA, and L-BFGS attacks. In such scenarios, the attack success rate on the target network ranges from 63.00% to 100%. Finally, we suggest two shielding strategies to hinder attack transferability: considering the Most Powerful Attacks (MPAs), and using a mismatched LSTM architecture.
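To illustrate the transferability experiment the abstract describes, the following is a minimal PyTorch sketch: adversarial examples are crafted with I-FGSM against a surrogate model only, then fed to a separate target model to measure the attack success rate. The two small MLP "detectors", the feature dimension, and the epsilon/step values are hypothetical placeholders, not the paper's actual models, datasets, or parameters.

```python
# Hedged sketch of I-FGSM attack transferability (assumes PyTorch).
import torch
import torch.nn as nn

def i_fgsm(model, x, y, eps=0.1, alpha=0.01, steps=10):
    """Iterative FGSM: repeatedly step along the sign of the loss
    gradient, projecting back into an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project onto eps-ball
    return x_adv.detach()

torch.manual_seed(0)
# Placeholder surrogate and target classifiers (in practice, both
# would be trained detectors; here they differ only in architecture).
surrogate = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
target    = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

x = torch.randn(128, 20)           # stand-in for network-traffic features
y = torch.randint(0, 2, (128,))    # benign / malicious labels

x_adv = i_fgsm(surrogate, x, y)    # crafted on the surrogate only

# Transferability: fraction of adversarial samples misclassified by
# the *target* model, which never saw the attack.
with torch.no_grad():
    fooled = (target(x_adv).argmax(dim=1) != y).float().mean()
print(f"attack success rate on target: {fooled:.2%}")
```

The same harness extends to the other attacks in the study by swapping the crafting routine (e.g., PGD adds a random start; DeepFool iterates toward the nearest decision boundary), while the transferability measurement on the target model stays unchanged.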
14 pages
Database: OpenAIRE