Perturbation-enabled Deep Federated Learning for Preserving Internet of Things-based Social Networks

Author: Sara Salim, Nour Moustafa, Benjamin Turnbull, Imran Razzak
Year of publication: 2022
Source: ACM Transactions on Multimedia Computing, Communications, and Applications. 18:1-19
ISSN: 1551-6865, 1551-6857
DOI: 10.1145/3537899
Description: Federated Learning (FL), as an emerging form of distributed machine learning (ML), can protect participants' private data from being substantially disclosed to cyber adversaries. It has potential uses in many large-scale, data-rich environments, such as the Internet of Things (IoT), Industrial IoT, Social Media (SM), and the emerging SM 3.0. However, FL is susceptible to some forms of data leakage through model inversion attacks, which analyze participants' uploaded model updates. Such attacks can reveal private data and potentially undermine some of the critical reasons for employing federated learning paradigms. This article proposes a novel differential privacy (DP)-based deep federated learning framework. We theoretically prove that our framework can fulfill DP's requirements under distinct privacy levels by appropriately adjusting the scaled variances of Gaussian noise. We then develop a Differentially Private Data-Level Perturbation (DP-DLP) mechanism to conceal any single data point's impact on the training phase. Experiments on real-world datasets, specifically the Social Media 3.0, Iris, and Human Activity Recognition (HAR) datasets, demonstrate that the proposed mechanism can offer high privacy, enhanced utility, and elevated efficiency. Consequently, it simplifies the development of various DP-based FL models with different trade-off preferences between data utility and privacy levels.
Database: OpenAIRE
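
The abstract does not spell out the DP-DLP algorithm itself, but the core idea it describes, calibrating the variance of Gaussian noise to a chosen privacy level and bounding any single record's influence before training, can be illustrated with the standard Gaussian mechanism. The sketch below is an assumption-laden stand-in, not the authors' implementation: the function names, the per-record norm clipping, and the noise-scale formula (the textbook Gaussian-mechanism bound, valid for epsilon < 1) are illustrative choices only.

```python
import numpy as np

def gaussian_sigma(epsilon, delta, sensitivity):
    # Textbook Gaussian-mechanism noise scale for (epsilon, delta)-DP,
    # valid for epsilon < 1: sigma >= sqrt(2 ln(1.25/delta)) * S / epsilon.
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

def perturb_batch(x, epsilon, delta, clip_norm=1.0, rng=None):
    # Data-level perturbation (illustrative): clip each record's L2 norm to
    # bound sensitivity, then add Gaussian noise so no single data point
    # dominates the local training phase.
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    x_clipped = x / np.maximum(1.0, norms / clip_norm)
    sigma = gaussian_sigma(epsilon, delta, sensitivity=clip_norm)
    return x_clipped + rng.normal(0.0, sigma, size=x.shape)

# Example: a client perturbs its local batch before it enters FL training.
batch = np.random.rand(32, 8)                    # 32 records, 8 features
noisy = perturb_batch(batch, epsilon=0.5, delta=1e-5)
```

Lowering epsilon (a stricter privacy level) increases sigma and hence the injected noise, which mirrors the utility-versus-privacy trade-off the abstract refers to.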