Showing 1 - 10 of 36 for search: '"Rush, Keith"'
Author:
Charles, Zachary, Ganesh, Arun, McKenna, Ryan, McMahan, H. Brendan, Mitchell, Nicole, Pillutla, Krishna, Rush, Keith
We investigate practical and scalable algorithms for training large language models (LLMs) with user-level differential privacy (DP) in order to provably safeguard all the examples contributed by each user. We study two variants of DP-SGD with: (1) e…
External link:
http://arxiv.org/abs/2407.07737
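The user-level clipping idea behind such DP-SGD variants can be sketched in a few lines; everything below (the function name, flat clipping, and the noise calibration) is an illustrative assumption, not the paper's exact algorithm:

```python
import numpy as np

def user_level_dp_sgd_step(params, per_user_grads, clip_norm, noise_mult, lr, rng):
    """One DP-SGD step with user-level clipping (illustrative sketch).

    per_user_grads: one gradient per sampled user, where each user's
    gradient already averages over all of that user's examples.
    """
    clipped = []
    for g in per_user_grads:
        norm = np.linalg.norm(g)
        # Clip each user's whole contribution, so sensitivity is bounded
        # per user rather than per example.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the per-user clip norm.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return params - lr * (total + noise) / len(per_user_grads)
```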
Author:
Wang, Congchao, Augenstein, Sean, Rush, Keith, Jitkrittum, Wittawat, Narasimhan, Harikrishna, Rawat, Ankit Singh, Menon, Aditya Krishna, Go, Alec
Reducing serving cost and latency is a fundamental concern for the deployment of language models (LMs) in business applications. To address this, cascades of LMs offer an effective solution that conditionally employs smaller models for simpler queries…
External link:
http://arxiv.org/abs/2406.00060
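A minimal sketch of a confidence-thresholded two-model cascade; the (answer, confidence) interface and the fixed threshold are assumptions for illustration, not the deferral rule studied in the paper:

```python
def cascade_generate(query, small_lm, large_lm, threshold=0.8):
    """Two-model cascade (illustrative): answer with the small model when
    its confidence clears a threshold, otherwise defer to the large model.

    small_lm / large_lm are assumed to return (answer, confidence) pairs.
    """
    answer, confidence = small_lm(query)
    if confidence >= threshold:
        return answer          # cheap path for simple queries
    return large_lm(query)[0]  # expensive path for hard queries
```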
We present DrJAX, a JAX-based library designed to support large-scale distributed and parallel machine learning algorithms that use MapReduce-style operations. DrJAX leverages JAX's sharding mechanisms to enable native targeting of TPUs and state-of-…
External link:
http://arxiv.org/abs/2403.07128
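The MapReduce-style pattern DrJAX targets can be mimicked in plain JAX; the sketch below uses only stock jax.vmap and jnp.mean and is not the DrJAX API:

```python
import jax
import jax.numpy as jnp

# MapReduce-style pattern in plain JAX: map a per-client computation
# over a leading "clients" axis, then reduce with a mean.
def per_client_loss(weights, client_batch):
    preds = client_batch["x"] @ weights
    return jnp.mean((preds - client_batch["y"]) ** 2)

def map_reduce(weights, clients):
    # vmap plays the role of "map" over clients; the mean is "reduce".
    losses = jax.vmap(lambda b: per_client_loss(weights, b))(clients)
    return jnp.mean(losses)

clients = {"x": jnp.ones((4, 8, 3)), "y": jnp.zeros((4, 8))}  # 4 clients
print(map_reduce(jnp.zeros(3), clients))
```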
Author:
Choquette-Choo, Christopher A., Ganesh, Arun, McKenna, Ryan, McMahan, H. Brendan, Rush, Keith, Thakurta, Abhradeep, Xu, Zheng
Matrix factorization (MF) mechanisms for differential privacy (DP) have substantially improved the state-of-the-art in privacy-utility-computation tradeoffs for ML applications in a variety of scenarios, but in both the centralized and federated sett…
External link:
http://arxiv.org/abs/2306.08153
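The matrix factorization mechanism writes the workload as A = B C and releases B(Cx + z); the sketch below compares two trivial factorizations of the prefix-sum workload using a standard expected-error proxy (the proxy and the factorizations are illustrative, not the paper's optimized ones):

```python
import numpy as np

n = 8
A = np.tril(np.ones((n, n)))  # prefix-sum workload: A @ x = running sums

def mf_error(B, C):
    """Rough expected-error proxy for the mechanism A x ~ B (C x + z):
    sensitivity is the max column norm of C (single participation),
    and the noise z is amplified by the rows of B."""
    sens = np.linalg.norm(C, axis=0).max()
    return sens * np.linalg.norm(B, ord="fro")

# Two trivial factorizations of A = B @ C, for comparison:
print(mf_error(A, np.eye(n)))   # noise added to inputs
print(mf_error(np.eye(n), A))   # noise added to outputs
```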
We study gradient descent under linearly correlated noise. Our work is motivated by recent practical methods for optimization with differential privacy (DP), such as DP-FTRL, which achieve strong performance in settings where privacy amplification te…
External link:
http://arxiv.org/abs/2302.01463
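A sketch of the setting: ordinary gradient descent, except that the step-t noise is the t-th entry of C z for some mixing matrix C, so the noise is linearly correlated across iterations (the interface below is assumed for illustration):

```python
import numpy as np

def gd_with_correlated_noise(grad_fn, x0, C, lr, rng):
    """Gradient descent where step-t noise is the t-th entry of C @ z,
    i.e., linearly correlated across iterations (sketch; C is any
    T x T matrix, e.g., one arising from a DP-FTRL-style factorization)."""
    T = C.shape[0]
    z = rng.normal(size=(T,) + x0.shape)          # i.i.d. base noise
    correlated = np.tensordot(C, z, axes=(1, 0))  # mix noise across time
    x = x0
    for t in range(T):
        x = x - lr * (grad_fn(x) + correlated[t])
    return x
```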
Federated learning (FL) is a general framework for learning across heterogeneous clients while preserving data privacy, under the orchestration of a central server. FL methods often compute gradients of loss functions purely locally (i.e., entirely at…
External link:
http://arxiv.org/abs/2301.07806
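For context, the purely local gradient computation looks like a standard FedAvg-style round; this sketch is generic FL boilerplate, not the algorithm introduced in the paper:

```python
import numpy as np

def fedavg_round(global_params, client_datasets, local_grad, lr, local_steps=5):
    """One FedAvg-style round (sketch): each client runs SGD purely on its
    own data, and the server averages the resulting model deltas."""
    deltas = []
    for data in client_datasets:
        params = global_params.copy()
        for _ in range(local_steps):
            params -= lr * local_grad(params, data)  # gradient stays local
        deltas.append(params - global_params)
    return global_params + np.mean(deltas, axis=0)
```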
We introduce new differentially private (DP) mechanisms for gradient-based machine learning (ML) with multiple passes (epochs) over a dataset, substantially improving the achievable privacy-utility-computation tradeoffs. We formalize the problem of D…
External link:
http://arxiv.org/abs/2211.06530
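One technical ingredient of the multi-epoch setting is that an example participating in several steps touches several columns of the encoder C, so sensitivity must account for all of them; the bound sketched below is a simplification that ignores sign choices and vector-valued subtleties:

```python
import numpy as np

def multi_epoch_sensitivity(C, participations):
    """Sensitivity sketch for x -> C @ x when one example contributes at
    several steps (multiple epochs) rather than once: the worst case sums
    the corresponding columns of C."""
    return np.linalg.norm(C[:, list(participations)].sum(axis=1))

n = 8
C = np.tril(np.ones((n, n)))
print(multi_epoch_sensitivity(C, [0]))     # single participation
print(multi_epoch_sensitivity(C, [0, 4]))  # two epochs, separation 4
```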
Motivated by recent applications requiring differential privacy over adaptive streams, we investigate the question of optimal instantiations of the matrix mechanism in this setting. We prove fundamental theoretical results on the applicability of mat…
External link:
http://arxiv.org/abs/2202.08312
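The matrix mechanism privatizes C x with Gaussian noise and decodes with B, where A = B C is the workload; with lower-triangular B and C, each output depends only on inputs seen so far, which is what makes the mechanism usable on adaptive streams. A sketch under a trivial factorization (choosing B and C well is the paper's subject):

```python
import numpy as np

def streaming_prefix_sums(stream, B, C, sigma, rng):
    """Release running sums of a stream via the matrix mechanism (sketch):
    privatize C x with Gaussian noise, then decode with B to estimate
    A x, where A = B @ C is the prefix-sum workload."""
    x = np.array(stream, dtype=float)
    noisy = C @ x + rng.normal(0.0, sigma, size=len(x))
    return B @ noisy

n = 8
A = np.tril(np.ones((n, n)))
B, C = A, np.eye(n)  # trivial factorization, for illustration only
print(streaming_prefix_sums(np.ones(n), B, C, sigma=0.5,
                            rng=np.random.default_rng(0)))
```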
Author:
Charles, Zachary, Rush, Keith
We study whether iterated vector fields (vector fields composed with themselves) are conservative. We give explicit examples of vector fields for which this self-composition preserves conservatism. Notably, this includes gradient vector fields of los…
External link:
http://arxiv.org/abs/2109.03973
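On a simply connected domain, a smooth vector field is conservative exactly when its Jacobian is symmetric, so conservatism of F ∘ F can be spot-checked numerically; a small sketch with the gradient field of a quadratic loss (the example is illustrative, not taken from the paper):

```python
import numpy as np

def jacobian(F, x, eps=1e-6):
    """Central finite-difference Jacobian of a vector field F at x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (F(x + e) - F(x - e)) / (2 * eps)
    return J

A = np.array([[2.0, 1.0], [1.0, 3.0]])  # f(x) = 0.5 x^T A x
F = lambda x: A @ x                     # grad f: linear, symmetric Jacobian
FF = lambda x: F(F(x))                  # iterated field, Jacobian A @ A
J = jacobian(FF, np.array([0.3, -0.7]))
print(np.allclose(J, J.T))              # True: A @ A is symmetric here
```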
Author:
Singhal, Karan, Sidahmed, Hakim, Garrett, Zachary, Wu, Shanshan, Rush, Keith, Prakash, Sushant
Personalization methods in federated learning aim to balance the benefits of federated and local training for data availability, communication cost, and robustness to client heterogeneity. Approaches that require clients to communicate all model para…
External link:
http://arxiv.org/abs/2102.03448
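A common shape for such personalization methods is to split the model into shared parameters that are communicated and local parameters that never leave the device; the sketch below assumes hypothetical grad_g / grad_l gradient helpers and is not the specific method proposed in the paper:

```python
import numpy as np

def partially_local_round(global_params, clients, grad_g, grad_l, lr, steps=5):
    """Partially personalized FL round (sketch): each client keeps private
    parameters on device and only the shared parameters are averaged.

    grad_g / grad_l return gradients w.r.t. the shared and local parts
    respectively (hypothetical helpers, assumed for illustration)."""
    updates = []
    for data, local_params in clients:
        g = global_params.copy()
        for _ in range(steps):
            local_params -= lr * grad_l(g, local_params, data)  # stays local
            g -= lr * grad_g(g, local_params, data)
        updates.append(g)
    return np.mean(updates, axis=0)  # only shared params are communicated
```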