Attribution-driven Causal Analysis for Detection of Adversarial Examples
Author: Jha, Susmit; Raj, Sunny; Fernandes, Steven Lawrence; Jha, Sumit Kumar; Jha, Somesh; Verma, Gunjan; Jalaian, Brian; Swami, Ananthram
Year of publication: 2019
Subject:
Document type: Working Paper
Description: Attribution methods have been developed to explain the decision of a machine learning model on a given input. We use the Integrated Gradients method for finding attributions to define the causal neighborhood of an input by incrementally masking high-attribution features. We study the robustness of machine learning models on benign and adversarial inputs in this neighborhood. Our study indicates that benign inputs are robust to the masking of high-attribution features, but adversarial inputs generated by state-of-the-art adversarial attack methods such as DeepFool, FGSM, CW, and PGD are not robust to such masking. Further, our study demonstrates that this concentration of high-attribution features responsible for the incorrect decision is more pronounced in physically realizable adversarial examples. This difference in the attributions of benign and adversarial inputs can be used to detect adversarial examples. Such a defense approach is independent of the training data and the attack method, and we demonstrate its effectiveness on digital and physically realizable perturbations (a minimal code sketch of the masking procedure follows this record). Comment: 11 pages, 6 figures
Database: arXiv
External link:
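The description outlines the detection recipe: compute per-feature attributions for an input, incrementally mask the highest-attribution features, and flag inputs whose prediction flips after only a small fraction of features are masked. Below is a minimal, framework-agnostic sketch of that idea, not the authors' implementation: `predict` and `attribute` are assumed callables (the attribution step would typically be Integrated Gradients), and the masking budget and decision threshold are illustrative values, not those reported in the paper.

```python
import numpy as np

def masking_robustness(x, predict, attribute, mask_value=0.0, steps=20):
    """Incrementally mask the highest-attribution features of input `x`
    and return the fraction of the masking budget needed to change the
    model's decision (1.0 if the decision never changes).

    predict(x)   -> 1-D array of class probabilities   (assumed interface)
    attribute(x) -> per-feature attribution scores,     (e.g. Integrated Gradients)
                    same shape as x
    """
    original_label = int(np.argmax(predict(x)))
    scores = attribute(x).ravel()
    order = np.argsort(-np.abs(scores))          # highest-attribution features first
    flat = x.ravel()
    n = flat.size

    for step in range(1, steps + 1):
        # Mask up to 10% of the features over `steps` increments (illustrative budget).
        k = int(n * 0.1 * step / steps)
        masked = flat.copy()
        masked[order[:k]] = mask_value            # mask the top-k attributed features
        if int(np.argmax(predict(masked.reshape(x.shape)))) != original_label:
            return step / steps                   # decision flipped early -> suspicious
    return 1.0                                    # decision stable under masking


def looks_adversarial(x, predict, attribute, threshold=0.3):
    """Heuristic detector: benign inputs tend to keep their label under masking,
    while adversarial inputs flip after masking only a few high-attribution features."""
    return masking_robustness(x, predict, attribute) < threshold
```

The key design choice mirrored here is that the detector needs only the trained model and an attribution method at test time; it does not retrain on adversarial data or assume knowledge of the attack.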