Showing 1 - 10 of 21 for search: '"MCMAHAN, BRENDAN"'
Author:
Eichner, Hubert, Ramage, Daniel, Bonawitz, Kallista, Huba, Dzmitry, Santoro, Tiziano, McLarnon, Brett, Van Overveldt, Timon, Fallen, Nova, Kairouz, Peter, Cheu, Albert, Daly, Katharine, Gascon, Adria, Gruteser, Marco, McMahan, Brendan
Federated Learning and Analytics (FLA) have seen widespread adoption by technology platforms for processing sensitive on-device data. However, basic FLA systems have privacy limitations: they do not necessarily require anonymization mechanisms like d…
External link:
http://arxiv.org/abs/2404.10764
We study gradient descent under linearly correlated noise. Our work is motivated by recent practical methods for optimization with differential privacy (DP), such as DP-FTRL, which achieve strong performance in settings where privacy amplification te…
External link:
http://arxiv.org/abs/2302.01463
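As context for the snippet above, here is a minimal runnable sketch of gradient descent where the injected noise is linearly correlated across iterations (a fixed mixing matrix applied to i.i.d. Gaussian noise). The quadratic objective, step size, and mixing matrix are illustrative assumptions, not the paper's construction:

import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic objective f(w) = 0.5 * ||A w - b||^2 (assumed for illustration).
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda w: A.T @ (A @ w - b)

T, d, lr, sigma = 100, 5, 0.05, 0.1

# Linearly correlated noise: draw i.i.d. Gaussian noise Z (one row per step)
# and mix it across time with a fixed lower-triangular matrix C, so the noise
# added at step t is correlated with the noise added at earlier steps.
Z = sigma * rng.standard_normal((T, d))
C = np.tril(np.ones((T, T))) / np.sqrt(np.arange(1, T + 1))[:, None]
correlated_noise = C @ Z

w = np.zeros(d)
for t in range(T):
    w = w - lr * (grad(w) + correlated_noise[t])

print("final loss:", 0.5 * np.linalg.norm(A @ w - b) ** 2)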
Author:
Charles, Zachary, Bonawitz, Kallista, Chiknavaryan, Stanislav, McMahan, Brendan, Arcas, Blaise Agüera y
Federated learning (FL) is a framework for machine learning across heterogeneous client devices in a privacy-preserving fashion. To date, most FL algorithms learn a "global" server model across multiple rounds. At each round, the same server model is…
External link:
http://arxiv.org/abs/2208.09432
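The round structure described in the snippet above can be made concrete with a short sketch of standard federated averaging (synthetic linear-regression clients and uniform weighting are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(1)

# Synthetic clients, each holding a private (X, y) dataset (assumed data).
clients = [(rng.standard_normal((30, 4)), rng.standard_normal(30)) for _ in range(8)]

def local_update(w, X, y, lr=0.01, steps=5):
    """A few local gradient steps starting from the broadcast server model."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

w_global = np.zeros(4)
for _ in range(20):
    # Each round, the *same* server model is broadcast to every client...
    local_models = [local_update(w_global.copy(), X, y) for X, y in clients]
    # ...and the server averages the returned models (uniform weights assumed).
    w_global = np.mean(local_models, axis=0)

print("global model after 20 rounds:", w_global)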
Motivated by recent applications requiring differential privacy over adaptive streams, we investigate the question of optimal instantiations of the matrix mechanism in this setting. We prove fundamental theoretical results on the applicability of mat…
External link:
http://arxiv.org/abs/2202.08312
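For background, the matrix mechanism referenced above can be stated in standard notation (this is a background summary, not the paper's specific result): to release the answers Ax of a linear query workload over a stream x, factor the workload and add noise only to the factored queries,

\[
  \widehat{Ax} = B\,(Cx + z), \qquad A = BC, \qquad z \sim \mathcal{N}\bigl(0,\; \sigma^2 \operatorname{sens}(C)^2 I\bigr),
\]

where sens(C) is the sensitivity of C to one user's contribution and the reconstruction error is governed by B. Choosing the factorization A = BC to minimize error subject to that sensitivity is the optimization problem at stake, with adaptivity of the stream adding the constraints the snippet alludes to.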
We consider training models with differential privacy (DP) using mini-batch gradients. The existing state-of-the-art, Differentially Private Stochastic Gradient Descent (DP-SGD), requires privacy amplification by sampling or shuffling to obtain the b…
External link:
http://arxiv.org/abs/2103.00039
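For reference, a minimal sketch of the DP-SGD step mentioned above: clip each example's gradient, sum, and add Gaussian noise calibrated to the clip norm (the clip norm, noise multiplier, and toy gradients are illustrative assumptions; this is not the paper's amplification-free alternative):

import numpy as np

rng = np.random.default_rng(2)

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.1):
    """One DP-SGD step: clip per-example gradients to L2 norm `clip`,
    sum them, add Gaussian noise scaled by noise_mult * clip, then average."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_sum = np.sum(clipped, axis=0) + noise_mult * clip * rng.standard_normal(w.shape)
    return w - lr * noisy_sum / len(per_example_grads)

# Toy usage with random stand-in gradients (assumed, for illustration).
w = np.zeros(3)
grads = [rng.standard_normal(3) for _ in range(32)]
print(dp_sgd_step(w, grads))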
Federated Learning enables mobile devices to collaboratively learn a shared inference model while keeping all the training data on a user's device, decoupling the ability to do machine learning from the need to store the data in the cloud. Existing w…
External link:
http://arxiv.org/abs/1912.00131
The discovery of heavy hitters (most frequent items) in user-generated data streams drives improvements in the app and web ecosystems, but can incur substantial privacy risks if not done with care. To address these risks, we propose a distributed and…
External link:
http://arxiv.org/abs/1902.08534
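The distributed discovery idea in the snippet above can be sketched as interactive prefix-trie growth: each round, sampled clients vote for one-character extensions of already-popular prefixes, and only extensions clearing a threshold survive. Client data, sample sizes, and the deterministic threshold are illustrative assumptions (a private protocol would randomize this test):

import random

random.seed(3)

# Assumed client data: each client holds one string (illustration only).
pool = ["apple"] * 40 + ["apply"] * 25 + ["banana"] * 30 + ["grape"] * 5
clients = [random.choice(pool) for _ in range(500)]

THRESHOLD = 20  # illustrative vote threshold (stand-in for a noised test)
END = "$"       # end-of-word marker

discovered = set()
prefixes = {""}  # frontier of popular prefixes found so far
for _ in range(10):
    votes = {}
    for word in random.sample(clients, 200):  # sample clients each round
        w = word + END
        for p in prefixes:
            if w.startswith(p) and len(w) > len(p):
                ext = w[: len(p) + 1]  # vote for a one-character extension
                votes[ext] = votes.get(ext, 0) + 1
                break
    survivors = {p for p, c in votes.items() if c >= THRESHOLD}
    discovered |= {p[:-1] for p in survivors if p.endswith(END)}  # completed words
    prefixes = {p for p in survivors if not p.endswith(END)}
    if not prefixes:
        break

print("heavy hitters:", sorted(discovered))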
We suggest a general oracle-based framework that captures different parallel stochastic optimization settings described by a dependency graph, and derive generic lower bounds in terms of this graph. We then use the framework and derive lower bounds f…
External link:
http://arxiv.org/abs/1805.10222
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are distributed (unevenly) over an extremely large number of nodes, but the goal remains to train a high-…
External link:
http://arxiv.org/abs/1511.03575
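The setting sketched above is usually written as the standard federated objective (standard notation, included here for context):

\[
  \min_{w \in \mathbb{R}^d} f(w) = \sum_{k=1}^{K} \frac{n_k}{n}\, F_k(w),
  \qquad
  F_k(w) = \frac{1}{n_k} \sum_{i \in \mathcal{P}_k} f_i(w),
\]

where node k holds the (possibly unbalanced, non-IID) example indices P_k with n_k = |P_k| and n is the total example count, and the goal is a single high-quality model w despite the uneven partition.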
Published in:
Communications of the ACM; Apr 2022, Vol. 65, Issue 4, p. 90-97, 8 pp., 1 illustration, 4 diagrams, 1 chart