Author: |
Cyffers, E. (Edwige), Even, M. (Mathieu), Bellet, A. (Aurélien), Massoulié, L. (Laurent) |
Contributors: |
S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh, Machine Learning in Information Networks [MAGNET], Dynamics of Geometric Networks [DYOGENE], Microsoft Research - Inria Joint Centre [MSR - INRIA] |
Language: |
English |
Year of publication: |
2022 |
Description: |
Decentralized optimization is increasingly popular in machine learning for its scalability and efficiency. Intuitively, it should also provide better privacy guarantees, as nodes only observe the messages sent by their neighbors in the network graph. But formalizing and quantifying this gain is challenging: existing results are typically limited to Local Differential Privacy (LDP) guarantees that overlook the advantages of decentralization. In this work, we introduce pairwise network differential privacy, a relaxation of LDP that captures the fact that the privacy leakage from a node $u$ to a node $v$ may depend on their relative position in the graph. We then analyze the combination of local noise injection with (simple or randomized) gossip averaging protocols on fixed and random communication graphs. We also derive a differentially private decentralized optimization algorithm that alternates between local gradient descent steps and gossip averaging. Our results show that our algorithms amplify privacy guarantees as a function of the distance between nodes in the graph, matching the privacy-utility trade-off of the trusted curator, up to factors that explicitly depend on the graph topology. Finally, we illustrate our privacy gains with experiments on synthetic and real-world datasets. |
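The core mechanism the abstract describes, local noise injection followed by gossip averaging on a communication graph, can be sketched as follows. This is a minimal illustrative simulation, not the authors' implementation: the function name, the ring-graph gossip matrix, and the noise scale are all assumptions chosen for the example.

```python
import numpy as np

def noisy_gossip_average(x, W, sigma, n_steps, rng):
    """Illustrative sketch: local Gaussian noise injection, then gossip averaging.

    x       : (n,) private values, one per node
    W       : (n, n) doubly stochastic gossip matrix of the communication graph
    sigma   : standard deviation of the noise each node adds locally
    n_steps : number of synchronous gossip rounds
    """
    # Each node perturbs its own value before communicating anything.
    z = x + rng.normal(0.0, sigma, size=x.shape)
    # Each round, every node replaces its value with a weighted
    # average of its neighbors' values (one multiplication by W).
    for _ in range(n_steps):
        z = W @ z
    return z

# Example: a ring graph on n nodes, where each node averages with
# its two neighbors (an assumed topology for illustration).
n = 10
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

rng = np.random.default_rng(0)
x = rng.normal(size=n)
est = noisy_gossip_average(x, W, sigma=0.1, n_steps=200, rng=rng)
# After enough rounds, every node holds (approximately) the same
# noisy estimate of the global mean of the private values.
```

Since `W` is doubly stochastic, the (noisy) mean is preserved across rounds while disagreement between nodes decays geometrically; the paper's analysis concerns how much of the injected noise a node `v` can "see" about a distant node `u` after such mixing.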
Database: |
OpenAIRE |
External link: |