Can Adversarial Weight Perturbations Inject Neural Backdoors?
Authors: | Yingyu Liang, Adarsh Kumar, Siddhant Garg, Vibhor Goel |
Language: | English |
Year of publication: | 2020 |
Subject: |
FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Cryptography and Security (cs.CR); Statistics - Machine Learning (stat.ML); Adversarial machine learning; Adversarial system; Adversary; Backdoor; Gradient descent; Norm (mathematics); Outcome (probability); Perspective (graphical); Algorithm; Computer science; Engineering and technology; Artificial intelligence & image processing; Electrical engineering, electronic engineering, information engineering; Industrial engineering & automation |
Source: | CIKM |
Description: | Adversarial machine learning has exposed several security hazards of neural models and has become an important research topic in recent times. Thus far, the concept of an "adversarial perturbation" has exclusively been used with reference to the input space, referring to a small, imperceptible change that can cause an ML model to err. In this work we extend the idea of "adversarial perturbations" to the space of model weights, specifically to inject backdoors in trained DNNs, which exposes a security risk of using publicly available trained models. Here, injecting a backdoor refers to obtaining a desired outcome from the model when a trigger pattern is added to the input, while retaining the original model predictions on non-triggered inputs. From the perspective of an adversary, we characterize these adversarial perturbations as constrained within an $\ell_{\infty}$ norm around the original model weights. We introduce adversarial perturbations in the model weights using a composite loss over the original model's predictions and the desired trigger behaviour, optimized through projected gradient descent (see the illustrative sketch after this record). We empirically show that these adversarial weight perturbations exist universally across several computer vision and natural language processing tasks. Our results show that backdoors can be successfully injected with a very small average relative change in model weight values for several applications. Accepted as a conference paper at CIKM 2020. |
Database: | OpenAIRE |
External link: |
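The abstract describes projected gradient descent applied to model weights with a composite loss: one term keeping clean-input predictions close to the original model's, one term pushing triggered inputs to an adversary-chosen class, with the perturbed weights projected back into an $\ell_{\infty}$ ball around the original weights. The sketch below is a minimal PyTorch reconstruction from the abstract alone, not the authors' implementation; the helper `add_trigger`, the hyperparameters `epsilon`, `lr`, `lambda_clean`, and `steps`, and the exact form and weighting of the composite loss are all illustrative assumptions.

```python
# Illustrative sketch (reconstructed from the abstract, not the authors' code):
# inject a backdoor by perturbing trained weights with projected gradient
# descent, keeping every weight within an L-infinity ball around its original value.
import copy
import torch
import torch.nn.functional as F

def inject_backdoor(model, data_loader, add_trigger, target_class,
                    epsilon=0.01, lr=1e-3, lambda_clean=1.0, steps=100):
    """Perturb `model`'s weights so triggered inputs map to `target_class`
    while clean predictions stay close to the original model's."""
    original = copy.deepcopy(model)   # frozen copy used as the reference model
    original.eval()
    init_weights = [p.detach().clone() for p in model.parameters()]

    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    data_iter = iter(data_loader)
    for _ in range(steps):
        try:
            x, _ = next(data_iter)
        except StopIteration:
            data_iter = iter(data_loader)
            x, _ = next(data_iter)

        # Composite loss: (1) match the original model's predictions on clean
        # inputs, (2) force triggered inputs to the adversary's target class.
        with torch.no_grad():
            clean_ref = original(x).argmax(dim=1)
        x_trig = add_trigger(x)                 # hypothetical helper, e.g. stamp a small patch
        target = torch.full_like(clean_ref, target_class)

        loss = lambda_clean * F.cross_entropy(model(x), clean_ref) \
               + F.cross_entropy(model(x_trig), target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Projection step: clip each weight back into an L-infinity ball of
        # radius epsilon around its original value (epsilon could instead be
        # set relative to the weight magnitude).
        with torch.no_grad():
            for p, p0 in zip(model.parameters(), init_weights):
                p.copy_(torch.max(torch.min(p, p0 + epsilon), p0 - epsilon))
    return model
```

The projection step is what realizes the stated $\ell_{\infty}$ constraint: after every gradient update, each weight is clipped back to within `epsilon` of its original value, which is why the injected backdoor corresponds to only a small average relative change in the weights.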