Digestive neural networks: A novel defense strategy against inference attacks in federated learning
Author: | Seyoung Ahn, Sunghyun Cho, Hongkyu Lee, Jeehyeong Kim, Junggab Son, Rasheed Hussain |
Year of publication: | 2021 |
Subject: | General Computer Science; Federated learning (FL); Federated learning security; Inference attack; Membership inference; Differential privacy; Digestive neural networks; Artificial neural network; Machine learning; ML security; AI security; Artificial intelligence; Edge computing; Decentralized computing; Eavesdropping; White-box assumption; t-SNE analysis |
Source: | Lee, H, Kim, J, Ahn, S, Hussain, R, Cho, S & Son, J 2021, 'Digestive neural networks: A novel defense strategy against inference attacks in federated learning', Computers and Security, vol. 109, 102378. https://doi.org/10.1016/j.cose.2021.102378 |
ISSN: | 0167-4048 |
DOI: | 10.1016/j.cose.2021.102378 |
Description: | Federated Learning (FL) is an efficient and secure machine learning technique designed for decentralized computing systems such as fog and edge computing. Its learning process involves frequent communication: participating local devices send updates, either gradients or the parameters of their models, to a central server that aggregates them and redistributes new weights to the devices. Because private data never leaves the individual local devices, FL is regarded as a robust solution for privacy preservation. However, recently introduced membership inference attacks pose a critical threat to FL mechanisms: by eavesdropping only on the updates sent to the central server, these attacks can recover the private data of a local device. A prevalent defense against such attacks is differential privacy, which adds a sufficient amount of noise to each update to hinder the recovery process, but this comes at a significant cost to the classification accuracy of FL. To alleviate this problem, this paper proposes a Digestive Neural Network (DNN), an independent neural network attached to the FL model. The private data owned by each device first passes through the DNN and is then used to train the FL model. The DNN modifies the input data, thereby distorting the resulting updates, so as to maximize the classification accuracy of FL while minimizing the accuracy of inference attacks. Our simulation results show that the proposed DNN performs well on both gradient sharing- and weight sharing-based FL mechanisms. For gradient sharing, the DNN achieved 16.17% higher classification accuracy and 9% lower attack accuracy than existing differential privacy schemes. For the weight sharing FL scheme, the DNN achieved an attack success rate up to 46.68% lower with 3% higher classification accuracy. |
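To make the described data flow concrete, the following is a minimal sketch, not the authors' implementation, of how a device-local digestive network could be prepended to the shared model on each FL client. It assumes a PyTorch-style setup with 28x28 single-channel inputs; the names DigestiveNet, LocalModel, and local_update are illustrative. Only the digest-then-train data flow is shown here; the paper's full training objective, which also drives down inference-attack accuracy on the shared updates, is not reproduced.

```python
# Minimal sketch of the digestive-network data flow described in the abstract.
# Assumes PyTorch; class/function names are illustrative, not the authors' code.
import torch
import torch.nn as nn

class DigestiveNet(nn.Module):
    """Device-local network that transforms raw inputs before they reach the
    shared FL model, so that shared updates reflect the distorted data rather
    than the private originals. This network is never sent to the server."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, in_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.transform(x)

class LocalModel(nn.Module):
    """Shared FL classifier whose parameters (or gradients) are uploaded."""
    def __init__(self, in_ch: int = 1, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_ch * 28 * 28, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def local_update(digestive, model, loader, device="cpu"):
    """One client round: private data passes through the digestive network,
    and the transformed data trains the shared model. Only `model`'s state
    is returned for aggregation; `digestive` stays on the device.
    (Simplified: trains both networks on classification loss only.)"""
    criterion = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(
        list(digestive.parameters()) + list(model.parameters()), lr=0.01
    )
    digestive.train(); model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        logits = model(digestive(x))  # FL model only ever sees digested inputs
        loss = criterion(logits, y)
        loss.backward()
        opt.step()
    return model.state_dict()         # update shared with the central server
```

In this sketch the server would aggregate only the returned LocalModel states, so an eavesdropper observing the uploaded updates sees statistics of the digested inputs rather than of the raw private data.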
Database: | OpenAIRE |
External link: |