Classifier Guidance Enhances Diffusion-based Adversarial Purification by Preserving Predictive Information

Author: Zhang, Mingkun; Li, Jianing; Chen, Wei; Guo, Jiafeng; Cheng, Xueqi
Publication year: 2024
Subject:
Document type: Working Paper
Description: Adversarial purification is a promising approach to defending neural networks against adversarial attacks. Recently, methods utilizing diffusion probabilistic models have achieved great success at adversarial purification in image classification tasks. However, such methods face a dilemma: they must balance noise removal against information preservation. This paper points out that existing diffusion-based adversarial purification methods gradually lose sample information during the core denoising process, causing occasional label shift in subsequent classification. As a remedy, we propose suppressing this information loss by introducing guidance from the classifier's confidence. Specifically, we propose the Classifier-cOnfidence gUided Purification (COUP) algorithm, which purifies adversarial examples while steering them away from the classifier's decision boundary. Experimental results show that COUP achieves better adversarial robustness under strong attack methods.
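The idea sketched in the abstract can be illustrated as a single guided denoising update: the usual diffusion (score-based) denoising direction is augmented with the gradient of the classifier's log-confidence, which pushes the sample away from the decision boundary. The toy Gaussian score function, linear classifier, and the step and guidance scales below are illustrative assumptions, not the paper's actual models or hyperparameters:

```python
import numpy as np

def classifier_log_confidence_grad(x, w, b, y):
    # Gradient of log p(y | x) for a linear (softmax) classifier with
    # weights w (classes x features) and biases b. For this model,
    # d/dx log p(y | x) = w_y - sum_k p_k * w_k.
    logits = w @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return w[y] - p @ w

def coup_denoise_step(x, score_fn, w, b, y, step=0.1, guidance=0.5):
    # One purification step: follow the denoising score plus
    # classifier-confidence guidance, so the sample is cleaned while
    # being nudged away from the classifier's decision boundary.
    direction = score_fn(x) + guidance * classifier_log_confidence_grad(x, w, b, y)
    return x + step * direction
```

With a standard Gaussian prior score `score_fn = lambda z: -z`, the guided step yields a larger logit margin for the guided class than the plain denoising step, which is the intended effect of the confidence guidance.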
Comment: Accepted by ECAI 2024
Database: arXiv