Showing 1 - 10 of 202 for the search: "Steinke, Thomas"
Author:
Choquette-Choo, Christopher A., Ganesh, Arun, Haque, Saminul, Steinke, Thomas, Thakurta, Abhradeep
We study the problem of computing the privacy parameters for DP machine learning when using privacy amplification via random batching and noise correlated across rounds via a correlation matrix $\textbf{C}$ (i.e., the matrix mechanism). Past work on …
External link:
http://arxiv.org/abs/2410.06266
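The correlated-noise idea behind the matrix mechanism can be illustrated with a small toy sketch (my own example, not the paper's analysis): to release a workload A @ x privately, factor A = B @ C, perturb C @ x with iid noise z, and post-process with B, so the effective noise B @ z is correlated across rounds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy matrix-mechanism sketch (illustrative only): release A @ x + B @ z,
# where A = B @ C and z is iid Gaussian noise.
T = 8
A = np.tril(np.ones((T, T)))        # workload: running totals
L = np.linalg.cholesky(A.T @ A)     # A.T @ A = L @ L.T
C = L.T                             # so C.T @ C = A.T @ A
B = A @ np.linalg.inv(C)            # then B @ C = A
x = rng.normal(size=T)              # per-round inputs
z = rng.normal(size=T)              # iid noise, correlated via B
release = A @ x + B @ z
```

The choice of factorization controls the noise correlation and hence the error; optimizing it is what this line of work studies.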
Author:
Steinke, Thomas, Nasr, Milad, Ganesh, Arun, Balle, Borja, Choquette-Choo, Christopher A., Jagielski, Matthew, Hayes, Jamie, Thakurta, Abhradeep Guha, Smith, Adam, Terzis, Andreas
We propose a simple heuristic privacy analysis of noisy clipped stochastic gradient descent (DP-SGD) in the setting where only the last iterate is released and the intermediate iterates remain hidden. Namely, our heuristic assumes a linear structure …
External link:
http://arxiv.org/abs/2410.06186
In this paper, we study differentially private (DP) algorithms for computing the geometric median (GM) of a dataset: Given $n$ points, $x_1,\dots,x_n$ in $\mathbb{R}^d$, the goal is to find a point $\theta$ that minimizes the sum of the Euclidean distances …
External link:
http://arxiv.org/abs/2406.07407
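The objective in this abstract, minimizing the sum of Euclidean distances, is the classical geometric-median problem. A minimal non-private sketch of Weiszfeld's algorithm (function name and initialization are my own; no privacy is involved, it only illustrates the objective the paper's DP algorithms approximate):

```python
import numpy as np

def geometric_median(x, iters=200, tol=1e-9):
    """Weiszfeld's algorithm: an iteratively re-weighted mean that
    minimizes sum_i ||theta - x_i||_2 (non-private illustration)."""
    theta = x.mean(axis=0)                    # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(x - theta, axis=1)
        w = 1.0 / np.maximum(d, tol)          # inverse-distance weights
        new = (w[:, None] * x).sum(axis=0) / w.sum()
        if np.linalg.norm(new - theta) < tol:
            break
        theta = new
    return theta

# The geometric median is robust: one far-away outlier barely moves it,
# unlike the mean.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [100.0, 100.0]])
med = geometric_median(pts)
```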
We consider the problem of computing tight privacy guarantees for the composition of subsampled differentially private mechanisms. Recent algorithms can numerically compute the privacy parameters to arbitrary precision but must be carefully applied.
External link:
http://arxiv.org/abs/2405.20769
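For context, the classical closed-form composition baselines that such numerical accountants improve on can be sketched as follows (standard textbook bounds, not the paper's algorithm):

```python
import math

def basic_composition(eps, k):
    """Basic composition: k adaptive runs of an eps-DP mechanism
    together satisfy (k * eps)-DP."""
    return k * eps

def advanced_composition(eps, k, delta_slack):
    """Advanced composition (Dwork-Rothblum-Vadhan): the same k runs
    satisfy (eps', k*delta + delta_slack)-DP with
    eps' = sqrt(2k ln(1/delta_slack)) * eps + k * eps * (e^eps - 1)."""
    return math.sqrt(2 * k * math.log(1 / delta_slack)) * eps \
        + k * eps * (math.exp(eps) - 1)

# For many rounds at small per-round eps, the sqrt(k) bound wins.
loose = basic_composition(0.1, 1000)             # 100.0
tighter = advanced_composition(0.1, 1000, 1e-5)  # well below 100
```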
Author:
Dvijotham, Krishnamurthy, McMahan, H. Brendan, Pillutla, Krishna, Steinke, Thomas, Thakurta, Abhradeep
In the task of differentially private (DP) continual counting, we receive a stream of increments and our goal is to output an approximate running total of these increments, without revealing too much about any specific increment. Despite its simplicity …
External link:
http://arxiv.org/abs/2404.16706
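The classical baseline for this problem is the binary-tree mechanism. A compact sketch (the standard construction, not the paper's method; Laplace noise is drawn as a difference of two exponentials):

```python
import math, random

random.seed(0)

def laplace(scale):
    # Laplace(scale) as a difference of two iid exponentials
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_running_totals(increments, eps=1.0):
    """Binary-tree mechanism for eps-DP continual counting (classic
    baseline). Noisy sums are cached for dyadic intervals; each
    increment touches at most one node per level, and each prefix sum
    is assembled from at most log2(T)+1 noisy nodes, so error grows
    polylogarithmically in the stream length T."""
    T = len(increments)
    levels = math.floor(math.log2(T)) + 1 if T > 0 else 1
    scale = levels / eps                  # split eps across the levels
    node = {}                             # (level, start) -> noisy dyadic sum

    def noisy_sum(lvl, start):            # interval [start, start + 2**lvl)
        if (lvl, start) not in node:
            node[(lvl, start)] = sum(increments[start:start + 2**lvl]) + laplace(scale)
        return node[(lvl, start)]

    totals = []
    for t in range(1, T + 1):             # prefix [0, t) as disjoint dyadic blocks
        s, pos = 0.0, 0
        for lvl in range(levels - 1, -1, -1):
            if pos + 2**lvl <= t:
                s += noisy_sum(lvl, pos)
                pos += 2**lvl
        totals.append(s)
    return totals
```

Caching the noisy node values is what keeps each increment's influence, and hence the privacy cost, bounded by the tree depth.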
Author:
Carlini, Nicholas, Paleka, Daniel, Dvijotham, Krishnamurthy Dj, Steinke, Thomas, Hayase, Jonathan, Cooper, A. Feder, Lee, Katherine, Jagielski, Matthew, Nasr, Milad, Conmy, Arthur, Yona, Itay, Wallace, Eric, Rolnick, David, Tramèr, Florian
We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI's ChatGPT or Google's PaLM-2. Specifically, our attack recovers the embedding projection layer (up to symmetries) …
External link:
http://arxiv.org/abs/2403.06634
The locality of solution features in cardiac electrophysiology simulations calls for adaptive methods. Due to the overhead incurred by established mesh refinement and coarsening, however, such approaches have failed to accelerate the computations. Here …
External link:
http://arxiv.org/abs/2311.07206
Privacy amplification exploits randomness in data selection to provide tighter differential privacy (DP) guarantees. This analysis is key to DP-SGD's success in machine learning but is not readily applicable to the newer state-of-the-art algorithms …
External link:
http://arxiv.org/abs/2310.15526
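A minimal closed-form version of privacy amplification by subsampling, the classical bound that this line of work refines, looks like:

```python
import math

def amplified_eps(eps, q):
    """Classical amplification-by-subsampling bound: an eps-DP
    mechanism run on a Poisson subsample that includes each record
    independently with probability q satisfies eps'-DP with
    eps' = log(1 + q * (exp(eps) - 1)), roughly q * eps for small q."""
    return math.log(1.0 + q * (math.exp(eps) - 1.0))
```

For example, amplified_eps(1.0, 0.01) is about 0.017, close to the small-q approximation q * eps = 0.01.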
Author:
Choquette-Choo, Christopher A., Dvijotham, Krishnamurthy, Pillutla, Krishna, Ganesh, Arun, Steinke, Thomas, Thakurta, Abhradeep
Published in:
ICLR 2024
Differentially private learning algorithms inject noise into the learning process. While the most common private learning algorithm, DP-SGD, adds independent Gaussian noise in each iteration, recent work on matrix factorization mechanisms has shown e…
External link:
http://arxiv.org/abs/2310.06771
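The independent-noise baseline mentioned here, a single DP-SGD step, can be sketched as follows (all names and hyperparameters are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_mult=1.0, lr=0.1):
    """One DP-SGD step (illustrative sketch): clip each per-example
    gradient to L2 norm clip_norm, average, then add independent
    Gaussian noise scaled by noise_mult * clip_norm / batch_size.
    Matrix-factorization mechanisms replace this fresh per-step noise
    with noise correlated across iterations."""
    n = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / n, size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

Clipping bounds each example's influence on the update, which is what makes the Gaussian noise scale sufficient for a DP guarantee.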
Author:
Knop, Alexander, Steinke, Thomas
We study the problem of counting the number of distinct elements in a dataset subject to the constraint of differential privacy. We consider the challenging setting of person-level DP (a.k.a. user-level DP) where each person may contribute an unbounded …
External link:
http://arxiv.org/abs/2308.12947
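A standard baseline for person-level DP distinct counting is contribution bounding plus Laplace noise; a rough sketch (names are my own, and this is not the paper's algorithm, whose point is precisely to handle unbounded contributions better):

```python
import random

random.seed(0)

def dp_distinct_count(records, eps=1.0, max_per_person=1):
    """Person-level eps-DP distinct count via contribution bounding
    (sketch): keep at most max_per_person items per person, so one
    person changes the kept distinct count by at most max_per_person,
    then add Laplace(max_per_person / eps) noise.
    records is a list of (person_id, item) pairs."""
    kept, used = set(), {}
    for person, item in records:
        if used.get(person, 0) < max_per_person:
            used[person] = used.get(person, 0) + 1
            kept.add(item)
    scale = max_per_person / eps
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return len(kept) + noise
```

Note that this baseline discards data once a person exceeds the cap, which is exactly the loss that unbounded-contribution algorithms aim to avoid.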