Boosting Imperceptibility of Stable Diffusion-based Adversarial Examples Generation with Momentum

Authors: Haque, Nashrah; Li, Xiang; Chen, Zhehui; Wu, Yanzhao; Yu, Lei; Iyengar, Arun; Wei, Wenqi
Publication year: 2024
Subject:
Document type: Working Paper
Description: We propose a novel framework, Stable Diffusion-based Momentum Integrated Adversarial Examples (SD-MIAE), for generating adversarial examples that effectively mislead neural network classifiers while remaining visually imperceptible and preserving semantic similarity to the original class label. Our method leverages the text-to-image generation capabilities of the Stable Diffusion model by manipulating the token embeddings corresponding to the specified class in its latent space. These token embeddings guide the generation of adversarial images that maintain high visual fidelity. The SD-MIAE framework consists of two phases: (1) an initial adversarial optimization phase that modifies token embeddings to produce misclassified yet natural-looking images, and (2) a momentum-based optimization phase that refines the adversarial perturbations. By introducing momentum, our approach stabilizes the optimization of perturbations across iterations, enhancing both the misclassification rate and the visual fidelity of the generated adversarial examples. Experimental results demonstrate that SD-MIAE achieves a high misclassification rate of 79%, a 35% improvement over the state-of-the-art method, while preserving the imperceptibility of the adversarial perturbations and the semantic similarity to the original class label, making it a practical method for robust adversarial evaluation.
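The momentum-based refinement phase described above can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked GitHub repository for that): the classifier loss is replaced by a toy differentiable surrogate, and the update follows the generic momentum-accumulated, normalized-gradient scheme (MI-FGSM style) that the abstract's "momentum stabilizes the optimization of perturbations across iterations" suggests. All names here (`toy_loss_grad`, `momentum_perturb`, `mu`, `lr`) are illustrative assumptions.

```python
import numpy as np

def toy_loss_grad(e):
    # Stand-in for the gradient of the classifier's loss w.r.t. the token
    # embedding; here simply the gradient of ||e - target||^2 pulling the
    # embedding toward a fixed "adversarial" target direction.
    target = np.ones_like(e)
    return 2.0 * (e - target)

def momentum_perturb(e0, mu=0.9, lr=0.05, steps=50):
    """Momentum-accumulated normalized-gradient descent on an embedding.

    mu  : momentum decay factor (accumulates gradient direction over steps)
    lr  : fixed step size applied along the sign of the accumulated gradient
    """
    e, g = e0.copy(), np.zeros_like(e0)
    for _ in range(steps):
        grad = toy_loss_grad(e)
        # Normalize by the L1 norm before accumulation so no single step
        # dominates; this is the stabilizing role momentum plays.
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        e = e - lr * np.sign(g)
    return e

e_adv = momentum_perturb(np.zeros(8))
```

Because each step moves along the sign of the *accumulated* gradient rather than the instantaneous one, the perturbation direction changes smoothly across iterations, which is the stabilization effect the abstract attributes to momentum.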
Comment: 10 pages, 12 figures. To be published in IEEE TPS 2024 Proceedings. Code available on GitHub: https://github.com/nashrahhaque/SD-MIAE
Database: arXiv