Evaluation of Model Quantization Method on Vitis-AI for Mitigating Adversarial Examples

Author: Yuta Fukuda, Kota Yoshida, Takeshi Fujino
Language: English
Year of publication: 2023
Subject:
Source: IEEE Access, Vol 11, pp 87200-87209 (2023)
Document type: article
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3305264
Description: Adversarial examples (AEs) are a typical model-evasion attack and a security threat to deep neural networks (DNNs). One countermeasure is adversarial training (AT), which trains DNNs on a dataset containing AEs to achieve robustness against AEs. However, the robustness obtained by AT drops sharply when the model parameters are quantized from 32-bit floats to 8-bit integers so that the DNN can run on edge devices with restricted hardware resources. Preliminary experiments in this study show that the robustness is lost during the fine-tuning process, in which the quantized model is trained on clean samples to reduce quantization error. We propose quantization-aware adversarial training (QAAT) to address this problem; it optimizes DNNs by conducting AT within the quantization flow. In this study, we constructed a QAAT model using Vitis-AI provided by Xilinx. We implemented the QAAT model on the ZCU104 evaluation board, equipped with a Zynq UltraScale+ MPSoC, and demonstrated its robustness against AEs.
Database: Directory of Open Access Journals
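
The core idea summarized in the abstract, conducting adversarial training inside the quantization flow rather than fine-tuning the quantized model on clean samples only, can be illustrated with a minimal sketch. The sketch below assumes PyTorch's eager-mode fake quantization (torch.ao.quantization) as a stand-in for the Vitis-AI quantizer used in the paper; the tiny CNN, the FGSM attack, the epsilon value, and the random data are illustrative assumptions, not the authors' setup or the Vitis-AI API.

```python
# Sketch: adversarial training inside a quantization-aware training (QAT) loop.
# Assumption: PyTorch fake quantization stands in for the Vitis-AI quantizer;
# model, attack, and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qat_qconfig, prepare_qat, convert
)


class TinyNet(nn.Module):
    """Toy CNN standing in for the network deployed on the edge device."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()        # marks where activations get fake-quantized
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.relu = nn.ReLU()
        self.pool = nn.AdaptiveAvgPool2d(4)
        self.fc = nn.Linear(16 * 4 * 4, 10)
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        x = self.pool(x).flatten(1)
        return self.dequant(self.fc(x))


def fgsm_attack(model, x, y, eps=8 / 255):
    """Craft FGSM adversarial examples against the fake-quantized model."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()


model = TinyNet().train()
model.qconfig = get_default_qat_qconfig("fbgemm")
qat_model = prepare_qat(model)          # insert fake-quant modules (8-bit simulation)
opt = torch.optim.SGD(qat_model.parameters(), lr=1e-2, momentum=0.9)

# One illustrative step on random data; a real run would loop over the dataset.
x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))

x_adv = fgsm_attack(qat_model, x, y)    # AEs computed w.r.t. the quantized graph
opt.zero_grad()
F.cross_entropy(qat_model(x_adv), y).backward()   # adversarial training step
opt.step()

# After training, convert to a true INT8 model for deployment.
qat_model.eval()
int8_model = convert(qat_model)
```

The point of the sketch is the ordering: adversarial examples are generated against the model with fake quantization already inserted, so the adversarial training sees the same quantization errors the deployed 8-bit model will exhibit, which is the property that plain post-quantization fine-tuning on clean samples loses.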