Showing 1 - 10 of 350
for search: '"Pillutla P"'
Author:
Charles, Zachary; Ganesh, Arun; McKenna, Ryan; McMahan, H. Brendan; Mitchell, Nicole; Pillutla, Krishna; Rush, Keith
We investigate practical and scalable algorithms for training large language models (LLMs) with user-level differential privacy (DP) in order to provably safeguard all the examples contributed by each user. We study two variants of DP-SGD with: (1) …
External link:
http://arxiv.org/abs/2407.07737
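The key mechanism behind user-level DP is bounding each user's total contribution rather than each example's. A minimal sketch of a DP-SGD step with per-user clipping (the function and toy update rule below are illustrative assumptions, not the paper's implementation):

import numpy as np

def user_level_dp_step(params, user_grads, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    """One illustrative DP-SGD step with per-user gradient clipping.

    user_grads: one averaged gradient per sampled user, so clipping bounds
    the influence of a user's *entire* data, not just a single example.
    """
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in user_grads]
    total = np.sum(clipped, axis=0)  # sensitivity clip_norm w.r.t. any one user
    noise = np.random.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return params - lr * (total + noise) / len(user_grads)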
Author:
Dogra, Atharvan; Pillutla, Krishna; Deshpande, Ameet; Sai, Ananya B; Nay, John; Rajpurohit, Tanmay; Kalyan, Ashwin; Ravindran, Balaraman
We explore the ability of large language model (LLM)-based agents to engage in subtle deception, such as strategically phrasing and intentionally manipulating information to misguide and deceive other agents. This harmful behavior can be hard to detect…
External link:
http://arxiv.org/abs/2405.04325
Author:
Dvijotham, Krishnamurthy; McMahan, H. Brendan; Pillutla, Krishna; Steinke, Thomas; Thakurta, Abhradeep
In the task of differentially private (DP) continual counting, we receive a stream of increments and our goal is to output an approximate running total of these increments without revealing too much about any specific increment. Despite its simplicity…
External link:
http://arxiv.org/abs/2404.16706
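For orientation, the classic binary-tree mechanism is a standard baseline for DP continual counting (it is a reference point, not the mechanism proposed in the paper): each prefix sum is assembled from the O(log T) noisy dyadic partial sums in its binary decomposition, so each increment touches only O(log T) noise draws. A rough sketch:

import numpy as np

def tree_prefix_sums(xs, sigma=1.0):
    """Binary-tree mechanism sketch for DP continual counting.

    Each dyadic interval of the stream gets one noisy partial sum, sampled
    once and reused, so per-step error grows only polylogarithmically in T.
    """
    node = {}  # (start, length) -> cached noisy partial sum

    def noisy_sum(start, length):
        if (start, length) not in node:
            node[(start, length)] = (sum(xs[start:start + length])
                                     + np.random.normal(0.0, sigma))
        return node[(start, length)]

    out = []
    for t in range(1, len(xs) + 1):
        total, pos, rem = 0.0, 0, t
        while rem > 0:
            length = 1 << (rem.bit_length() - 1)  # largest power of two <= rem
            total += noisy_sum(pos, length)
            pos, rem = pos + length, rem - length
        out.append(total)
    return out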
We consider the distributionally robust optimization (DRO) problem with a spectral risk-based uncertainty set and an $f$-divergence penalty. This formulation includes common risk-sensitive learning objectives such as regularized conditional value-at-risk (CVaR)…
External link:
http://arxiv.org/abs/2310.13863
Author:
Kandpal, Nikhil; Pillutla, Krishna; Oprea, Alina; Kairouz, Peter; Choquette-Choo, Christopher A.; Xu, Zheng
Fine-tuning is a common and effective method for tailoring large language models (LLMs) to specialized tasks and applications. In this paper, we study the privacy implications of fine-tuning LLMs on user data. To this end, we consider a realistic threat…
External link:
http://arxiv.org/abs/2310.09266
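A standard primitive in this kind of privacy analysis is a score-and-threshold membership test; the sketch below is a generic illustration (the scoring rule and threshold are assumptions, not the paper's attack):

def members_by_loss(model_loss, samples, threshold):
    """Flag samples on which the model's loss is suspiciously low.

    model_loss: callable mapping a sample to the fine-tuned model's loss.
    Low loss relative to a threshold (often calibrated against a reference
    model) is evidence the sample, or its user, was in the fine-tuning set.
    """
    return [s for s in samples if model_loss(s) < threshold]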
Author:
Choquette-Choo, Christopher A.; Dvijotham, Krishnamurthy; Pillutla, Krishna; Ganesh, Arun; Steinke, Thomas; Thakurta, Abhradeep
Published in:
ICLR 2024
Differentially private learning algorithms inject noise into the learning process. While the most common private learning algorithm, DP-SGD, adds independent Gaussian noise in each iteration, recent work on matrix factorization mechanisms has shown…
External link:
http://arxiv.org/abs/2310.06771
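The matrix-mechanism idea behind correlated noise: to privately release A x (e.g., prefix sums of gradients), factor A = B C, add i.i.d. noise z at the level of C, and output A x + B z; each step's noise is then a fixed linear mix of shared Gaussians, i.e., correlated across iterations. A sketch under the assumption that an invertible factor C is given (choosing C well is the hard part the literature addresses):

import numpy as np

def correlated_noise_release(x, C, sigma=1.0):
    """Release the prefix sums of x with matrix-mechanism noise A x + B z.

    A is the lower-triangular prefix-sum workload; B = A C^{-1}, so A = B C.
    With C = I this degenerates to independent noise at every step.
    """
    T = len(x)
    A = np.tril(np.ones((T, T)))
    B = A @ np.linalg.inv(C)
    z = np.random.normal(0.0, sigma, size=T)
    return A @ x + B @ z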
Published in:
NeurIPS 2023 (Datasets & Benchmarks)
We introduce Dataset Grouper, a library to create large-scale group-structured (e.g., federated) datasets, enabling federated learning simulation at the scale of foundation models. This library facilitates the creation of group-structured versions of…
External link:
http://arxiv.org/abs/2307.09619
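Dataset Grouper's actual API is not reproduced here; as a concept-only sketch, a group-structured (federated-style) dataset is just a flat dataset partitioned by a group key such as a user ID:

from collections import defaultdict

def group_by(examples, key_fn):
    """Partition a flat dataset into a group-structured (per-group) one."""
    groups = defaultdict(list)
    for ex in examples:
        groups[key_fn(ex)].append(ex)
    return dict(groups)

# e.g., clients = group_by(records, key_fn=lambda ex: ex["user_id"])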
Author:
Pillutla, Krishna; Andrew, Galen; Kairouz, Peter; McMahan, H. Brendan; Oprea, Alina; Oh, Sewoong
We present a rigorous methodology for auditing differentially private machine learning algorithms by adding multiple carefully designed examples called canaries. We take a first-principles approach based on three key components. First, we introduce…
External link:
http://arxiv.org/abs/2305.18447
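The generic shape of a canary audit (this sketch illustrates the idea, not the paper's methodology): randomly include half of the canaries in training, score all of them with a membership statistic, and compare detection rates. Since DP implies TPR <= e^eps * FPR + delta, a large gap yields an empirical lower bound on eps.

import numpy as np

def canary_audit(train_fn, score_fn, canaries, threshold, seed=0):
    """Train with a random half of the canaries and test their separability.

    train_fn: trains a model on the provided canaries (plus real data inside).
    score_fn: membership score; higher should mean "more likely trained on".
    """
    rng = np.random.default_rng(seed)
    included = rng.random(len(canaries)) < 0.5
    model = train_fn([c for c, m in zip(canaries, included) if m])
    scores = np.array([score_fn(model, c) for c in canaries])
    tpr = float(np.mean(scores[included] > threshold))
    fpr = float(np.mean(scores[~included] > threshold))
    return tpr, fpr  # plug into TPR <= exp(eps) * FPR + delta for an eps bound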
Gauss-Newton methods and their stochastic versions have been widely used in machine learning and signal processing. Their nonsmooth counterparts, modified Gauss-Newton or prox-linear algorithms, can lead to contrasting outcomes when compared to gradient…
External link:
http://arxiv.org/abs/2305.10634
Author:
Pillutla, Krishna; Liu, Lang; Thickstun, John; Welleck, Sean; Swayamdipta, Swabha; Zellers, Rowan; Oh, Sewoong; Choi, Yejin; Harchaoui, Zaid
Generative artificial intelligence has made significant strides, producing text indistinguishable from human prose and remarkably photorealistic images. Automatically measuring how close the generated data distribution is to the target distribution is…
External link:
http://arxiv.org/abs/2212.14578
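One common recipe for comparing a generated distribution to a target from samples: embed both sample sets, quantize the embeddings into shared clusters, and compare the resulting histograms with a divergence. The sketch below uses a symmetrized KL for concreteness (the clustering and divergence choices are assumptions, not this paper's exact metric):

import numpy as np
from sklearn.cluster import KMeans

def histogram_divergence(gen_emb, ref_emb, k=50, eps=1e-10, seed=0):
    """Quantize embeddings into k shared clusters, then compare the two
    cluster histograms with a symmetrized KL divergence."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(
        np.vstack([gen_emb, ref_emb]))
    p = np.bincount(km.predict(gen_emb), minlength=k).astype(float) + eps
    q = np.bincount(km.predict(ref_emb), minlength=k).astype(float) + eps
    p, q = p / p.sum(), q / q.sum()
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * (kl(p, q) + kl(q, p))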