Showing 1 - 10 of 2,383 results for the search: '"Thakurta A"'
Author:
Choquette-Choo, Christopher A., Ganesh, Arun, Haque, Saminul, Steinke, Thomas, Thakurta, Abhradeep
We study the problem of computing the privacy parameters for DP machine learning when using privacy amplification via random batching and noise correlated across rounds via a correlation matrix $\textbf{C}$ (i.e., the matrix mechanism). Past work on … (a toy sketch of this correlated-noise setup follows the link below).
External link:
http://arxiv.org/abs/2410.06266
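
The correlated-noise idea above can be illustrated with a small toy: factor the prefix-sum workload A = B C, add Gaussian noise to C G, and reconstruct B (C G + Z) = A G + B Z. This is only an illustrative sketch, not the paper's construction or analysis; the dimensions, factorization choice, and noise scale are assumptions.

    import numpy as np

    # Toy sketch of the matrix mechanism for releasing prefix sums of gradients
    # with correlated noise (illustrative only; not the paper's construction).
    T, d = 8, 3                           # rounds and gradient dimension (assumed)
    rng = np.random.default_rng(0)
    G = rng.normal(size=(T, d))           # stand-in for per-round clipped gradient sums

    A = np.tril(np.ones((T, T)))          # prefix-sum workload matrix
    C = np.linalg.cholesky(A.T @ A).T     # one possible factorization with A = B @ C
    B = A @ np.linalg.inv(C)

    sigma = 1.0                           # noise multiplier; calibrating it to the
    Z = sigma * rng.normal(size=(T, d))   # sensitivity of C is where the DP analysis lives

    noisy_prefix_sums = B @ (C @ G + Z)   # equals A @ G + B @ Z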
Author:
Steinke, Thomas, Nasr, Milad, Ganesh, Arun, Balle, Borja, Choquette-Choo, Christopher A., Jagielski, Matthew, Hayes, Jamie, Thakurta, Abhradeep Guha, Smith, Adam, Terzis, Andreas
We propose a simple heuristic privacy analysis of noisy clipped stochastic gradient descent (DP-SGD) in the setting where only the last iterate is released and the intermediate iterates remain hidden. Namely, our heuristic assumes a linear structure … (a minimal DP-SGD sketch follows the link below).
External link:
http://arxiv.org/abs/2410.06186
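
For context, a minimal sketch of noisy clipped SGD in which only the final iterate leaves the training loop (the setting studied above). The loss, constants, and clipping rule are illustrative assumptions; the paper's heuristic privacy analysis is not reproduced here.

    import numpy as np

    def dp_sgd_last_iterate(grad_fn, x0, steps=100, lr=0.1, clip=1.0, sigma=1.0, seed=0):
        """Noisy clipped SGD; intermediate iterates stay internal, only the last is returned."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            g = grad_fn(x)
            g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))        # clip to norm <= clip
            x = x - lr * (g + sigma * clip * rng.normal(size=x.shape))  # Gaussian noise each step
        return x                                                        # only the last iterate is released

    # Example on a toy quadratic loss f(x) = 0.5 * ||x||^2 (gradient is x).
    x_final = dp_sgd_last_iterate(lambda x: x, x0=np.ones(3))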
Large ASR models can inadvertently leak sensitive information, which can be mitigated by formal privacy measures like differential privacy (DP). However, traditional DP training is computationally expensive and can hurt model performance. Our study …
External link:
http://arxiv.org/abs/2410.01948
Self-supervised learning (SSL) methods for large speech models have proven to be highly effective at ASR. With the interest in public deployment of large pre-trained models, there is a rising concern about unintended memorization and leakage of sensitive …
External link:
http://arxiv.org/abs/2409.13953
Author:
Zhou, Guanglei, Korrapati, Bhargav, Reddy, Gaurav Rajavendra, Hu, Jiang, Chen, Yiran, Thakurta, Dipto G.
Generation of diverse VLSI layout patterns is crucial for various downstream tasks in design for manufacturing (DFM) studies. However, the lengthy design cycles often hinder the creation of a comprehensive layout pattern library, and new detrimental …
External link:
http://arxiv.org/abs/2409.01348
In this paper we revisit the DP stochastic convex optimization (SCO) problem. For convex smooth losses, it is well-known that the canonical DP-SGD (stochastic gradient descent) achieves the optimal rate of $O\left(\frac{LR}{\sqrt{n}} + \cdots\right)$ … (the standard full form of this rate is written out after the link below).
External link:
http://arxiv.org/abs/2406.02716
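
The rate in the snippet above is cut off by the listing. For reference, the standard optimal excess-risk bound for $(\epsilon,\delta)$-DP stochastic convex optimization with $L$-Lipschitz losses over a domain of radius $R$ on $n$ samples in dimension $d$, which the truncated expression presumably matches, reads:

    \mathbb{E}\,[F(\hat{x})] - \min_{x} F(x)
      \;=\; O\!\left(\frac{LR}{\sqrt{n}} \;+\; \frac{LR\sqrt{d\,\log(1/\delta)}}{n\epsilon}\right)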
Author:
Dvijotham, Krishnamurthy, McMahan, H. Brendan, Pillutla, Krishna, Steinke, Thomas, Thakurta, Abhradeep
In the task of differentially private (DP) continual counting, we receive a stream of increments and our goal is to output an approximate running total of these increments, without revealing too much about any specific increment. Despite its simplicity … (a toy sketch of the classic tree-based mechanism follows the link below).
External link:
http://arxiv.org/abs/2404.16706
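
As background for the continual-counting entry above, here is a toy sketch of the classic binary-tree mechanism: noise is added once per dyadic interval, and each running total is assembled from O(log T) noisy nodes. The structure and noise scale are illustrative assumptions, not the mechanism studied in the paper.

    import numpy as np

    def dyadic_decompose(t):
        """Decompose [0, t) into disjoint dyadic intervals (start, length)."""
        intervals, start = [], 0
        while start < t:
            length = 1
            while start % (2 * length) == 0 and start + 2 * length <= t:
                length *= 2
            intervals.append((start, length))
            start += length
        return intervals

    def tree_counting(stream, sigma=1.0, seed=0):
        """Noisy running totals: each answer sums O(log T) noisy dyadic nodes."""
        rng = np.random.default_rng(seed)
        noisy_node = {}
        def node(start, length):
            if (start, length) not in noisy_node:
                noisy_node[(start, length)] = (
                    sum(stream[start:start + length]) + sigma * rng.normal()
                )
            return noisy_node[(start, length)]
        return [sum(node(s, l) for s, l in dyadic_decompose(t))
                for t in range(1, len(stream) + 1)]

    counts = tree_counting([1, 0, 1, 1, 0, 1, 1, 1])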
Author:
Brown, Gavin, Dvijotham, Krishnamurthy, Evans, Georgina, Liu, Daogao, Smith, Adam, Thakurta, Abhradeep
We provide an improved analysis of standard differentially private gradient descent for linear regression under the squared error loss. Under modest assumptions on the input, we characterize the distribution of the iterate at each time step. Our analysis … (a minimal DP gradient descent sketch follows the link below).
External link:
http://arxiv.org/abs/2402.13531
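
A minimal sketch of the kind of algorithm analyzed above: differentially private gradient descent for linear regression with per-example gradient clipping and Gaussian noise. The step size, clipping threshold, and noise calibration are illustrative assumptions and do not reflect the paper's exact setting or analysis.

    import numpy as np

    def dp_gd_linear_regression(X, y, steps=200, lr=0.05, clip=1.0, sigma=1.0, seed=0):
        """Noisy gradient descent on the squared error loss with per-example clipping."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        theta = np.zeros(d)
        for _ in range(steps):
            residuals = X @ theta - y
            per_example = residuals[:, None] * X            # grad of 0.5*(x^T theta - y)^2
            norms = np.linalg.norm(per_example, axis=1, keepdims=True)
            per_example *= np.minimum(1.0, clip / (norms + 1e-12))
            grad = per_example.mean(axis=0)
            noise = sigma * clip / n * rng.normal(size=d)   # scale tied to clipped sensitivity
            theta -= lr * (grad + noise)
        return theta

    # Synthetic example: recover a 3-dimensional coefficient vector.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))
    theta_true = np.array([1.0, -2.0, 0.5])
    y = X @ theta_true + 0.1 * rng.normal(size=500)
    theta_hat = dp_gd_linear_regression(X, y)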
We study the task of $(\epsilon, \delta)$-differentially private online convex optimization (OCO). In the online setting, the release of each distinct decision or iterate carries with it the potential for privacy loss. This problem has a long history … (a toy noisy online gradient descent sketch follows the link below).
External link:
http://arxiv.org/abs/2312.11534
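
For orientation, a toy baseline for private online convex optimization: projected online gradient descent with Gaussian noise injected into every released iterate, reflecting that each released decision can leak privacy. This is only an illustrative sketch under assumed parameters; it is not the algorithm or analysis of the paper above.

    import numpy as np

    def dp_online_gd(gradients, radius=1.0, lr=0.1, sigma=0.5, seed=0):
        """Noisy projected online gradient descent; every iterate is released."""
        rng = np.random.default_rng(seed)
        x = np.zeros_like(gradients[0], dtype=float)
        released = []
        for g in gradients:                        # one (clipped) gradient per round
            x = x - lr * (g + sigma * rng.normal(size=x.shape))
            norm = np.linalg.norm(x)
            if norm > radius:                      # project back onto the L2 ball
                x *= radius / norm
            released.append(x.copy())              # each released iterate costs privacy
        return released

    # Example with fixed linear losses f_t(x) = <c, x>, whose gradient is c.
    c = np.array([1.0, -0.5, 0.2])
    iterates = dp_online_gd([c] * 10)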
Privacy amplification exploits randomness in data selection to provide tighter differential privacy (DP) guarantees. This analysis is key to DP-SGD's success in machine learning, but is not readily applicable to the newer state-of-the-art algorithms … (the classic amplification-by-subsampling bound is sketched after the link below).
External link:
http://arxiv.org/abs/2310.15526
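
To ground the amplification claim above: the classic amplification-by-subsampling result says that running an eps-DP mechanism on a Poisson-subsampled q-fraction of the data yields roughly log(1 + q*(e^eps - 1))-DP (with delta scaled by q). The sampling rate and epsilon below are illustrative values only.

    import math

    def amplified_epsilon(eps, q):
        """Amplified epsilon for sampling rate q and a base eps-DP mechanism."""
        return math.log1p(q * math.expm1(eps))

    print(amplified_epsilon(eps=1.0, q=0.01))   # ~0.017, far tighter than the base 1.0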