Adversarial attack-based security vulnerability verification using deep learning library for multimedia video surveillance
Author: Sungmoon Kwon, Manpyo Hong, Jin Kwak, Jaehan Jeong, Taeshik Shon
Year of publication: 2019
Subject: Contextual image classification; Deep learning; Pattern recognition; Autoencoder; Convolutional neural network; Jacobian matrix and determinant; Artificial intelligence; MNIST database; Computer science; Software engineering; Computer Networks and Communications; Hardware and Architecture; Media Technology; Software
Source: Multimedia Tools and Applications, 79:16077–16091
ISSN: 1380-7501 (print); 1573-7721 (electronic)
DOI: 10.1007/s11042-019-7262-8
Description: Although deep learning has recently been employed in various fields, it is exposed to the risk of adversarial attacks. In this study, we experimentally verified that the classification accuracy of deep learning image classification models is lowered by adversarial samples generated by malicious attackers. We used the MNIST dataset, a representative image dataset, and the NSL-KDD dataset, a representative network dataset. We measured detection accuracy by injecting adversarial samples into Autoencoder and Convolutional Neural Network (CNN) classification models built with the TensorFlow and PyTorch libraries. The adversarial samples were generated by transforming the MNIST and NSL-KDD test datasets using the Jacobian-based Saliency Map Attack (JSMA) and the Fast Gradient Sign Method (FGSM). When these samples were injected into the classification models, the detection accuracy dropped by a minimum of 21.82% and a maximum of 39.08%. (A minimal FGSM sketch follows this record.)
Database: OpenAIRE
External link:
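To make the attack described in the abstract concrete: FGSM perturbs an input x by a small step ε in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇_x L(x, y)). Below is a minimal PyTorch sketch of this idea; the SmallCNN architecture, the fgsm_attack helper, and ε = 0.1 are illustrative assumptions, not the exact models or settings evaluated in the paper.

```python
# Minimal FGSM sketch in PyTorch. The model and epsilon are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCNN(nn.Module):
    """Toy MNIST classifier standing in for the paper's CNN model."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.fc = nn.Linear(16 * 14 * 14, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv(x)), 2)
        return self.fc(x.flatten(1))


def fgsm_attack(model, images, labels, epsilon=0.1):
    """Shift each image by epsilon along the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # x_adv = x + epsilon * sign(grad_x loss), clipped to the valid pixel range
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    model = SmallCNN().eval()
    x = torch.rand(8, 1, 28, 28)           # stand-in MNIST batch in [0, 1]
    y = torch.randint(0, 10, (8,))
    x_adv = fgsm_attack(model, x, y, epsilon=0.1)
    clean_acc = (model(x).argmax(1) == y).float().mean().item()
    adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
    print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

On a trained classifier rather than the random stand-in above, the adversarial accuracy would be expected to fall well below the clean accuracy, which is the effect the paper quantifies as a 21.82% to 39.08% reduction in detection accuracy.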