Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization
Author: Wang, Bao; Lin, Alex T.; Zhu, Wei; Yin, Penghang; Bertozzi, Andrea L.; Osher, Stanley J.
Year of publication: 2018
Source: Inverse Problems and Imaging, 2020
Document type: Working Paper
Description: We improve the robustness of Deep Neural Nets (DNNs) to adversarial attacks by using an interpolating function as the output activation. This data-dependent activation remarkably improves both the generalization and robustness of DNNs. On the CIFAR10 benchmark, we raise the robust accuracy of the adversarially trained ResNet20 from $\sim 46\%$ to $\sim 69\%$ under the state-of-the-art Iterative Fast Gradient Sign Method (IFGSM) based adversarial attack. When we combine this data-dependent activation with total variation minimization on adversarial images and training data augmentation, we improve the robust accuracy of ResNet56 by 38.9$\%$ under the strongest IFGSM attack. Furthermore, we provide an intuitive explanation of our defense by analyzing the geometry of the feature space. Comment: 17 pages, 6 figures
Database: arXiv
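The abstract names two concrete components that lend themselves to short illustrations. First, a minimal PyTorch sketch of the IFGSM attack used in the evaluation; this is not the authors' code, and the function name `ifgsm_attack` along with the default `epsilon`, `alpha`, and `num_iters` values are assumptions chosen for this example.

```python
# Illustrative sketch (not the authors' code): the Iterative Fast Gradient
# Sign Method (IFGSM) attack referenced in the abstract, under an L_inf budget.
import torch
import torch.nn.functional as F


def ifgsm_attack(model, images, labels, epsilon=8 / 255, alpha=2 / 255, num_iters=10):
    """Craft adversarial examples with IFGSM; hyperparameters are assumed."""
    model.eval()
    adv = images.clone().detach()

    for _ in range(num_iters):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]

        # Ascend the loss along the gradient sign, then project back into the
        # epsilon-ball around the clean images and into the valid pixel range.
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, images - epsilon), images + epsilon)
        adv = adv.clamp(0.0, 1.0)

    return adv.detach()
```

Second, a minimal sketch of total variation minimization applied to adversarial images, here via plain gradient descent on an anisotropic TV-plus-fidelity objective; the paper may use a different TV solver, and `weight`, `num_steps`, and `lr` are assumed parameters.

```python
def tv_minimize(adv_images, weight=0.1, num_steps=50, lr=0.1):
    """Denoise images by descending ||u - x||^2 + weight * TV(u) (sketch only)."""
    u = adv_images.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([u], lr=lr)

    for _ in range(num_steps):
        optimizer.zero_grad()
        fidelity = ((u - adv_images) ** 2).sum()
        # Anisotropic total variation: absolute differences along height and width.
        tv = (u[..., 1:, :] - u[..., :-1, :]).abs().sum() + \
             (u[..., :, 1:] - u[..., :, :-1]).abs().sum()
        (fidelity + weight * tv).backward()
        optimizer.step()

    return u.detach().clamp(0.0, 1.0)
```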