Showing 1 - 10 of 200
for the search: '"Koskela, Antti"'
Author:
Koskela, Antti
The hidden state threat model of differential privacy (DP) assumes that the adversary has access only to the final trained machine learning (ML) model, without seeing intermediate states during training. Current privacy analyses under this model, how…
External link:
http://arxiv.org/abs/2407.04884
Author:
Koskela, Antti, Mohammadi, Jafar
We present a novel method for accurately auditing the differential privacy (DP) guarantees of DP mechanisms. In particular, our solution is applicable to auditing DP guarantees of machine learning (ML) models. Previous auditing methods tightly captur…
External link:
http://arxiv.org/abs/2406.04827
Private selection mechanisms (e.g., Report Noisy Max, Sparse Vector) are fundamental primitives of differentially private (DP) data analysis with wide applications to private query release, voting, and hyperparameter tuning. Recent work (Liu and Talw…
External link:
http://arxiv.org/abs/2402.06701
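As a rough, illustrative aside (not taken from the paper above): Report Noisy Max, one of the private selection primitives the snippet names, can be sketched in a few lines; the Laplace scale 2*sensitivity/epsilon below is a conservative, standard choice and is only an assumption for this sketch.

import numpy as np

def report_noisy_max(scores, epsilon, sensitivity=1.0, rng=None):
    # Add independent Laplace noise to every candidate's score and release
    # only the index of the largest noisy score; the scores themselves are
    # never published. Illustrative sketch, not the paper's mechanism.
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.asarray(scores, dtype=float) + rng.laplace(
        scale=2.0 * sensitivity / epsilon, size=len(scores))
    return int(np.argmax(noisy))

# Example: privately select the best of four candidate queries.
print(report_noisy_max([10.0, 12.0, 11.5, 3.0], epsilon=1.0))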
In the arena of privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) has outstripped the objective perturbation mechanism in popularity and interest. Though unrivaled in versatility, DP-SGD requires a non-t…
External link:
http://arxiv.org/abs/2401.00583
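For context on DP-SGD as mentioned in the snippet, here is a minimal, framework-free sketch of one update step (per-example gradient clipping plus Gaussian noise); the function and parameter names are illustrative assumptions, not taken from the cited paper.

import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    # Clip each per-example gradient to L2 norm clip_norm, average over the
    # batch, add Gaussian noise with std noise_multiplier * clip_norm / batch,
    # then take an ordinary gradient step. Illustrative sketch only.
    rng = np.random.default_rng() if rng is None else rng
    grads = np.asarray(per_example_grads, dtype=float)      # shape (batch, dim)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noise = rng.normal(scale=noise_multiplier * clip_norm / grads.shape[0],
                       size=grads.shape[1])
    return np.asarray(params, dtype=float) - lr * (clipped.mean(axis=0) + noise)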
Author:
Koskela, Antti, Kulkarni, Tejas
Published in:
NeurIPS 2023
Tuning the hyperparameters of differentially private (DP) machine learning (ML) algorithms often requires the use of sensitive data, and this may leak private information via hyperparameter values. Recently, Papernot and Steinke (2022) proposed a certain…
External link:
http://arxiv.org/abs/2301.11989
Published in:
International Conference on Learning Representations 2023
Individual privacy accounting enables bounding differential privacy (DP) loss individually for each participant involved in the analysis. This can be informative as often the individual privacy losses are considerably smaller than those indicated by…
External link:
http://arxiv.org/abs/2209.15596
Markov chain Monte Carlo (MCMC) algorithms have long been the main workhorses of Bayesian inference. Among them, Hamiltonian Monte Carlo (HMC) has recently become very popular due to its efficiency resulting from effective use of the gradients of the…
External link:
http://arxiv.org/abs/2106.09376
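The snippet notes that HMC's efficiency comes from using gradients of the target's log-density; as a generic illustration (not the variant analysed in the paper), one leapfrog trajectory of the kind HMC simulates can be written as follows.

import numpy as np

def leapfrog(theta, momentum, grad_log_post, step_size, n_steps):
    # Simulate Hamiltonian dynamics: half momentum step, alternating full
    # position and momentum steps, and a correcting half momentum step at the end.
    theta, momentum = np.array(theta, float), np.array(momentum, float)
    momentum = momentum + 0.5 * step_size * grad_log_post(theta)
    for _ in range(n_steps):
        theta = theta + step_size * momentum
        momentum = momentum + step_size * grad_log_post(theta)
    momentum = momentum - 0.5 * step_size * grad_log_post(theta)
    return theta, momentum

# Example: standard normal target, whose log-density has gradient -theta.
theta, p = leapfrog(np.zeros(2), np.ones(2), lambda t: -t, step_size=0.1, n_steps=20)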
The shuffle model of differential privacy is a novel distributed privacy model based on a combination of local privacy mechanisms and a secure shuffler. It has been shown that the additional randomisation provided by the shuffler improves privacy bounds…
External link:
http://arxiv.org/abs/2106.00477
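To make the "local mechanisms plus a secure shuffler" structure concrete, here is a toy sketch assuming binary randomised response as the local mechanism; this choice is an assumption for illustration, not the construction studied in the paper.

import numpy as np

def randomized_response(bit, epsilon, rng):
    # epsilon-LDP randomised response: report the true bit with
    # probability e^eps / (e^eps + 1), otherwise flip it.
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_keep else 1 - bit

def shuffle_model_round(bits, epsilon_local, rng=None):
    # Each user randomises locally; the trusted shuffler then outputs the
    # messages in a uniformly random order, severing the user-to-message link.
    # Amplification-by-shuffling results bound the central DP of this output.
    rng = np.random.default_rng() if rng is None else rng
    messages = [randomized_response(int(b), epsilon_local, rng) for b in bits]
    return list(rng.permutation(messages))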
Author:
Koskela, Antti, Honkela, Antti
The recently proposed Fast Fourier Transform (FFT)-based accountant for evaluating $(\varepsilon,\delta)$-differential privacy guarantees using the privacy loss distribution formalism has been shown to give tighter bounds than commonly used methods s…
External link:
http://arxiv.org/abs/2102.12412
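For reference, the quantity that privacy loss distribution (PLD) accountants such as the FFT-based one evaluate is the tight $\delta(\varepsilon)$; under the standard formalism (and ignoring any loss mass at infinity) it has the hockey-stick form

\[
\delta(\varepsilon) \;=\; \int_{-\infty}^{\infty} \max\bigl(0,\; 1 - e^{\varepsilon - s}\bigr)\,\omega(s)\,\mathrm{d}s ,
\]

where $\omega$ is the privacy loss distribution of the mechanism; under composition the PLDs of the individual mechanisms convolve, which is what the FFT accountant computes efficiently on a discretised grid.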
Generalized linear models (GLMs) such as logistic regression are among the most widely used arms in the data analyst's repertoire and are often used on sensitive datasets. A large body of prior work that investigates GLMs under differential privacy (DP) cons…
External link:
http://arxiv.org/abs/2011.00467