A knowledge distillation strategy for enhancing the adversarial robustness of lightweight automatic modulation classification models

Authors: Fanghao Xu, Chao Wang, Jiakai Liang, Chenyang Zuo, Keqiang Yue, Wenjun Li
Language: English
Year of publication: 2024
Source: IET Communications, Vol 18, Iss 14, Pp 827-845 (2024)
Document type: article
ISSN: 1751-8636, 1751-8628
DOI: 10.1049/cmu2.12793
Description: Deep learning based automatic modulation classification models are at risk of interference from adversarial attacks, in which an attacker adds carefully crafted adversarial perturbations to the transmitted signal so that the classification model misclassifies the received signal. Motivated by the requirements of efficient computing and edge deployment, a lightweight automatic modulation classification model is proposed. Because the lightweight model is more susceptible to adversarial attacks, and because adversarial training of it fails to achieve the desired results, an adversarial attack defense system for the lightweight model is further proposed, which enhances its robustness when subjected to adversarial attacks. The defense method transfers adversarial robustness from a trained large automatic modulation classification model to the lightweight model through adversarial robust distillation. In white-box attack scenarios, the proposed method exhibits better adversarial robustness than current defense techniques for feature fusion based automatic modulation classification models.
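The abstract's core idea, transferring adversarial robustness from a large teacher to a lightweight student via distillation, can be sketched as follows. This is only an illustrative toy, not the paper's method: the linear "models", the FGSM attack choice, and the loss weights (`eps`, `T`, `alpha`) are all assumptions standing in for the paper's deep feature-fusion AMC networks and its specific distillation objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical toy setup: linear classifiers stand in for the large
# (adversarially trained, frozen) teacher and the lightweight student.
D, C = 8, 4                           # signal feature dim, modulation classes
W_teacher = rng.normal(size=(D, C))   # frozen robust teacher (placeholder)
W_student = rng.normal(size=(D, C)) * 0.1

def fgsm(x, y, W, eps):
    """One FGSM step against a linear-softmax model: x + eps*sign(grad_x CE)."""
    p = softmax(x @ W)
    onehot = np.eye(C)[y]
    grad_x = (p - onehot) @ W.T       # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad_x)

def robust_distill_loss(x, y, W_s, W_t, eps=0.1, T=4.0, alpha=0.7):
    """Adversarial robust distillation sketch: KL(teacher || student) on
    adversarial inputs, mixed with the student's clean cross-entropy."""
    x_adv = fgsm(x, y, W_s, eps)                 # craft attacks on the student
    p_t = softmax((x_adv @ W_t) / T)             # soft teacher targets
    p_s = softmax((x_adv @ W_s) / T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(x @ W_s)[np.arange(len(y)), y] + 1e-12)
    return alpha * (T * T) * kl.mean() + (1 - alpha) * ce.mean()

x = rng.normal(size=(16, D))                     # batch of signal features
y = rng.integers(0, C, size=16)                  # true modulation labels
loss = robust_distill_loss(x, y, W_student, W_teacher)
print(f"distillation loss: {float(loss):.4f}")
```

Minimizing this loss with respect to `W_student` would push the lightweight model to imitate the robust teacher's behaviour on perturbed signals, which is the mechanism the abstract describes for inheriting robustness without the cost of adversarially training the small model from scratch.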
Database: Directory of Open Access Journals