Robust VAEs via Generating Process of Noise Augmented Data

Authors: Irobe, Hiroo, Aoki, Wataru, Yamazaki, Kimihiro, Zhang, Yuhui, Nakagawa, Takumi, Waida, Hiroki, Wada, Yuichiro, Kanamori, Takafumi
Publication year: 2024
Subject:
Document type: Working Paper
Description: Advancing defensive mechanisms against adversarial attacks in generative models is a critical research topic in machine learning. Our study focuses on a specific type of generative model: Variational Auto-Encoders (VAEs). Contrary to the common belief in the existing literature that injecting noise into training data makes models more robust, our preliminary experiments revealed that naive use of noise augmentation did not substantially improve VAE robustness. In fact, it even degraded the quality of learned representations, making VAEs more susceptible to adversarial perturbations. This paper introduces a novel framework that enhances robustness by regularizing the latent-space divergence between original and noise-augmented data. By incorporating a paired probabilistic prior into the standard variational lower bound, our method significantly boosts defense against adversarial attacks. Our empirical evaluations demonstrate that this approach, termed Robust Augmented Variational Auto-ENcoder (RAVEN), yields superior performance in resisting adversarial inputs on widely recognized benchmark datasets.
Database: arXiv
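The description mentions regularizing the latent-space divergence between the posteriors of an original input and its noise-augmented copy. A minimal sketch of that pairing idea, assuming diagonal-Gaussian encoder outputs and a closed-form KL penalty (the function name, example values, and weighting are illustrative, not the paper's exact RAVEN objective):

```python
import math

def kl_diag_gaussians(mu1, logvar1, mu2, logvar2):
    """Closed-form KL( N(mu1, diag(exp(logvar1))) || N(mu2, diag(exp(logvar2))) ),
    summed over latent dimensions."""
    kl = 0.0
    for m1, lv1, m2, lv2 in zip(mu1, logvar1, mu2, logvar2):
        kl += 0.5 * (lv2 - lv1 + (math.exp(lv1) + (m1 - m2) ** 2) / math.exp(lv2) - 1.0)
    return kl

# Hypothetical encoder outputs for a clean input x and its noise-augmented copy.
mu_clean, logvar_clean = [0.2, -0.1], [0.0, 0.0]
mu_noisy, logvar_noisy = [0.5, 0.3], [0.1, -0.2]

pairing_penalty = kl_diag_gaussians(mu_clean, logvar_clean, mu_noisy, logvar_noisy)
# A RAVEN-style training loss would add a weighted version of this penalty
# to the usual negative ELBO, pulling the two posteriors together.
```

The penalty is zero when the two posteriors coincide and grows as the noise-augmented encoding drifts from the clean one, which is the behavior the regularizer needs.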