Moving target defense against adversarial attacks

Author: WANG Bin, CHEN Liang, QIAN Yaguan, GUO Yankai, SHAO Qiqi, WANG Jiamin
Language: English; Chinese
Year of publication: 2021
Source: 网络与信息安全学报, Vol 7, Iss 1, pp 113-120 (2021)
Document type: article
ISSN: 2096-109X
DOI: 10.11959/j.issn.2096-109x.2021012
Description: Deep neural networks have been successfully applied to image classification, but recent research shows that they are vulnerable to adversarial attacks. A moving target defense method was proposed that dynamically switches among member models according to a Bayes-Stackelberg game strategy, which prevents an attacker from continuously obtaining consistent information and thus blocks the construction of adversarial examples. To improve the defense effect of the proposed method, the gradient consistency among the member models was taken as a measure to construct a new training loss function that increases the difference among the member models. Experimental results show that the proposed method improves the moving target defense performance of the image classification system and significantly reduces the success rate of adversarial-example attacks.
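The gradient-consistency idea in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): each member model is stood in for by a linear softmax classifier so that input gradients have a closed form, and the cosine similarity between two members' input gradients serves as the consistency measure added as a penalty to the training loss. The function names, the choice of cosine similarity, and the weighting factor `lam` are all assumptions for illustration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def input_gradient(W, x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x for a linear
    softmax classifier with weight matrix W (a hypothetical stand-in for a
    deep member model). Equals W^T (p - onehot(y))."""
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

def gradient_consistency(W1, W2, x, y):
    """Cosine similarity between the input gradients of two member models.
    Lower similarity means more diverse members, so adversarial examples
    crafted against one member transfer less readily to the other."""
    g1 = input_gradient(W1, x, y)
    g2 = input_gradient(W2, x, y)
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2) + 1e-12))

def joint_loss(W1, W2, x, y, lam=1.0):
    """Sum of the members' cross-entropy losses plus a penalty on gradient
    consistency (weight lam is an assumed hyperparameter), encouraging
    diversity among member models during joint training."""
    ce = 0.0
    for W in (W1, W2):
        p = softmax(W @ x)
        ce += -np.log(p[y] + 1e-12)
    return ce + lam * gradient_consistency(W1, W2, x, y)
```

At defense time, the abstract's dynamic switching would then pick which trained member answers each query according to the Bayes-Stackelberg game strategy, so the attacker never observes one fixed gradient surface.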
Database: Directory of Open Access Journals