Practical Defences Against Model Inversion Attacks for Split Neural Networks
Author: Titcombe, Tom; Hall, Adam J.; Papadopoulos, Pavlos; Romanini, Daniele
Year of Publication: 2021
Document Type: Working Paper
Description: We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server. We demonstrate that the attack can be performed successfully even when the attacker has limited knowledge of the data distribution. We propose a simple additive noise method to defend against model inversion, finding that it can significantly reduce attack efficacy at an acceptable accuracy trade-off on MNIST. Furthermore, we show that NoPeekNN, an existing defensive method, protects different information from exposure, suggesting that a combined defence is necessary to fully protect private user data. Comment: ICLR 2021 Workshop on Distributed and Private Machine Learning (DPML 2021)
Database: arXiv
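The additive noise defence summarised in the description lends itself to a short illustration. The following is a minimal sketch, not the authors' implementation: it assumes a PyTorch split network in which the client perturbs its intermediate activations with Laplacian noise before sending them to the computation server. The class name NoisySplitClient, the client-side layer architecture, and the noise_scale parameter are hypothetical stand-ins.

```python
# Sketch of an additive-noise defence at the split point of a split
# neural network. The module name, architecture, and Laplace noise
# choice are illustrative assumptions, not the paper's exact code.
import torch
import torch.nn as nn


class NoisySplitClient(nn.Module):
    """Client-side model segment that perturbs its output activations
    before sending them to a potentially malicious computation server."""

    def __init__(self, noise_scale: float = 0.1):
        super().__init__()
        # Client-side layers up to the split point (hypothetical).
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        self.noise_scale = noise_scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x)
        # Additive noise on the intermediate representation: the server
        # only ever sees the perturbed activations, which degrades the
        # quality of any model inversion reconstruction.
        noise = torch.distributions.Laplace(0.0, self.noise_scale).sample(z.shape)
        return z + noise.to(z.device)


# Usage with MNIST-shaped input (batch of 1x28x28 images).
client = NoisySplitClient(noise_scale=0.5)
smashed = client(torch.randn(8, 1, 28, 28))  # sent on to the server segment
```

In this sketch, increasing noise_scale strengthens the defence against inversion at the cost of downstream accuracy, reflecting the trade-off the description reports on MNIST.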