Defending against Poisoning Backdoor Attacks on Federated Meta-learning
Author: Chien-Lun Chen, Sara Babakniya, Marco Paolieri, Leana Golubchik
Year of publication: 2022
Subject:
Source: ACM Transactions on Intelligent Systems and Technology, 13:1-25
ISSN: 2157-6912, 2157-6904
DOI: 10.1145/3523062
Description: Federated learning allows multiple users to collaboratively train a shared classification model while preserving data privacy. This approach, where model updates are aggregated by a central server, has been shown to be vulnerable to poisoning backdoor attacks: a malicious user can alter the shared model so that specific inputs from a given class are classified arbitrarily. In this article, we analyze the effects of backdoor attacks on federated meta-learning, where users train a model that can be adapted to different sets of output classes using only a few examples. While the ability to adapt could, in principle, make federated learning frameworks more robust to backdoor attacks (when new training examples are benign), we find that even one-shot attacks can be very successful and can persist through additional training. To address these vulnerabilities, we propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the similarity of its features to a support set of labeled examples. By removing the decision logic from the model shared with the federation, the success and persistence of backdoor attacks are greatly reduced. (A minimal illustrative sketch of this similarity-based prediction follows this record.)
Database: OpenAIRE
External link:
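
To make the matching-networks-style prediction in the description concrete, below is a minimal Python (NumPy) sketch of classifying a query from the similarity of its features to a locally held, labeled support set. The cosine-similarity choice, the softmax attention over the support set, and all function and variable names (`matching_predict`, `support_features`, etc.) are illustrative assumptions, not the authors' implementation; the point is only that the class decision depends on the support set rather than on the shared model's output layer.

```python
import numpy as np

def cosine_similarity(query, support):
    """Cosine similarity between a query vector (d,) and each row of support (n, d)."""
    q = query / (np.linalg.norm(query) + 1e-8)
    s = support / (np.linalg.norm(support, axis=1, keepdims=True) + 1e-8)
    return s @ q  # shape (n,)

def matching_predict(query_features, support_features, support_labels, num_classes):
    """Predict the class of a query from its similarity to a labeled support set.

    query_features:   (d,) embedding of the input to classify
    support_features: (n, d) embeddings of the support examples
    support_labels:   (n,) integer labels of the support examples
    """
    sims = cosine_similarity(query_features, support_features)
    # Attention weights over the support set (softmax of similarities).
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()
    # Accumulate attention mass per class; the decision logic lives in the
    # local support set, not in the parameters shared with the federation.
    class_scores = np.zeros(num_classes)
    for w, y in zip(weights, support_labels):
        class_scores[y] += w
    return int(np.argmax(class_scores))

# Toy usage (hypothetical data): 5 support examples in a 4-d feature space, 3 classes.
rng = np.random.default_rng(0)
support_features = rng.normal(size=(5, 4))
support_labels = np.array([0, 0, 1, 2, 2])
query = support_features[3] + 0.05 * rng.normal(size=4)  # close to a class-2 example
print(matching_predict(query, support_features, support_labels, num_classes=3))  # expected: 2
```

In this sketch the shared model would only supply the feature embeddings; because each user keeps their own support set, a poisoned shared model cannot directly dictate the final label assignment, which is the intuition behind the reduced attack success and persistence reported in the abstract.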