On the (In)security of Peer-to-Peer Decentralized Machine Learning

Authors: Pasquini, Dario; Raynal, Mathilde; Troncoso, Carmela
Publication Year: 2022
Subject:
Document Type: Working Paper
Description: In this work, we carry out the first in-depth privacy analysis of Decentralized Learning -- a collaborative machine learning framework aimed at addressing the main limitations of federated learning. We introduce a suite of novel attacks for both passive and active decentralized adversaries. We demonstrate that, contrary to what is claimed by proponents of decentralized learning, it does not offer any security advantage over federated learning. Rather, it increases the attack surface, enabling any user in the system to perform privacy attacks such as gradient inversion, and even to gain full control over honest users' local models. We also show that, given the current state of the art in protections, privacy-preserving configurations of decentralized learning require fully connected networks, losing any practical advantage over the federated setup and thereby defeating the objective of the decentralized approach.
Comment: IEEE S&P'23 (previous title: "On the Privacy of Decentralized Machine Learning"); fixed an error in the neighbor-discovery trick
Database: arXiv
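
The description above notes that, in decentralized learning, every participant exchanges model updates directly with its neighbors, which is what enlarges the attack surface compared to federated learning's single aggregator. The following is a minimal, hypothetical sketch of plain gossip averaging, not the authors' attack code; names such as Peer, gossip_round, and topology are assumptions. Its only purpose is to show that each neighbor observes other peers' raw models, the vantage point from which a passive participant could attempt attacks such as gradient inversion.

import numpy as np

# Hypothetical sketch of decentralized (gossip) averaging; not the paper's implementation.
class Peer:
    def __init__(self, rng, dim=10):
        self.model = rng.normal(size=dim)   # local model parameters
        self.received = {}                  # raw updates observed from neighbors

    def local_step(self, rng, lr=0.1):
        # Stand-in for a local SGD step on private data.
        fake_gradient = rng.normal(size=self.model.shape)
        self.model -= lr * fake_gradient

    def exchange(self, peer_id, update):
        # Neighbor updates arrive here in the clear; a passive adversary can log
        # them and attempt gradient-inversion-style reconstruction of the sender's data.
        self.received[peer_id] = update.copy()

    def aggregate(self):
        # Plain gossip averaging over the local model and all received neighbor models.
        updates = [self.model] + list(self.received.values())
        self.model = np.mean(updates, axis=0)
        self.received.clear()

def gossip_round(peers, topology, rng):
    for p in peers:
        p.local_step(rng)
    for i, neighbors in topology.items():
        for j in neighbors:
            peers[j].exchange(i, peers[i].model)   # model sent directly to each neighbor
    for p in peers:
        p.aggregate()

rng = np.random.default_rng(0)
peers = [Peer(rng) for _ in range(4)]
topology = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a sparse line graph, not fully connected
gossip_round(peers, topology, rng)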