Layer-wise Characterization of Latent Information Leakage in Federated Learning

Author: Mo, Fan; Borovykh, Anastasia; Malekzadeh, Mohammad; Haddadi, Hamed; Demetriou, Soteris
Year: 2020
Subject:
Document type: Working Paper
Description: Training deep neural networks via federated learning allows clients to share only the model trained on their data, instead of the original data. Prior work has demonstrated that, in practice, a client's private information, unrelated to the main learning task, can be recovered from the model's gradients, which compromises the promised privacy protection. However, there is still no formal approach for quantifying the leakage of private information via the shared model updates or gradients. In this work, we analyze property inference attacks and define two metrics based on (i) an adaptation of the empirical $\mathcal{V}$-information and (ii) a sensitivity analysis using Jacobian matrices, which allows us to measure changes in the gradients with respect to latent information. We show the applicability of the proposed metrics for localizing private latent information in a layer-wise manner, in two settings where we (i) do or (ii) do not have knowledge of the attacker's capabilities. We evaluate the proposed metrics for quantifying information leakage on three real-world datasets using three benchmark models.
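The Jacobian-based sensitivity idea in the description can be illustrated with a minimal sketch. This is NOT the paper's exact metric: it uses a toy two-layer MLP with manual backpropagation, encodes a hypothetical latent attribute as a shift along one input feature, and approximates the Jacobian of the per-layer gradients with respect to that attribute by a finite difference. All names and the model are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_grads(x, y, W1, W2):
    """Manual forward/backward pass for a 2-layer tanh MLP with
    squared loss; returns the per-layer gradients (dW1, dW2)."""
    h = np.tanh(x @ W1)           # hidden activations
    out = h @ W2                  # model output
    err = out - y                 # dL/dout for L = 0.5*||out - y||^2
    dW2 = h.T @ err
    dh = (err @ W2.T) * (1 - h**2)  # backprop through tanh
    dW1 = x.T @ dh
    return dW1, dW2

d, hdim = 8, 16
W1 = rng.normal(size=(d, hdim)) * 0.1
W2 = rng.normal(size=(hdim, 1)) * 0.1

x = rng.normal(size=(32, d))
y = rng.normal(size=(32, 1))

# Hypothetical latent attribute z: a small shift on input feature 0.
eps = 1e-3
x_pert = x.copy()
x_pert[:, 0] += eps

g0 = mlp_grads(x, y, W1, W2)
g1 = mlp_grads(x_pert, y, W1, W2)

# Layer-wise sensitivity: Frobenius-norm change in the gradient per
# unit change in z (a finite-difference proxy for the Jacobian norm).
sensitivity = [np.linalg.norm(a - b) / eps for a, b in zip(g1, g0)]
for name, s in zip(["layer1", "layer2"], sensitivity):
    print(f"{name}: {s:.4f}")
```

A layer whose gradients shift more per unit change in the latent attribute would, under this proxy, be flagged as leaking more latent information; the paper's metrics formalize this layer-wise comparison.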
Comment: 9 pages; ICLR Workshop on Distributed and Private Machine Learning
Database: arXiv