Showing 1 - 10 of 82 for search: "Ohrimenko, Olga"
With the growing integration of AI in daily life, ensuring the robustness of systems to inference-time attacks is crucial. Among the approaches for certifying robustness to such adversarial examples, randomized smoothing has emerged as highly promising …
External link:
http://arxiv.org/abs/2408.00728
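The randomized smoothing idea behind this line of work can be illustrated with a minimal sketch in the style of the standard Gaussian-smoothing certificate: sample noisy copies of the input, take a majority vote of a base classifier, and convert the top-class probability into a certified L2 radius. The base classifier below is a toy placeholder, not any model from the cited papers, and a real certificate would use a confidence lower bound on the class probability rather than the raw empirical estimate.

```python
import numpy as np
from statistics import NormalDist

def base_classify(x):
    # Toy base classifier: thresholds the mean of the input vector.
    return 1 if x.mean() > 0 else 0

def smoothed_predict_and_certify(x, sigma=0.5, n=1000, seed=0):
    """Predict with the Gaussian-smoothed classifier and return a
    certified L2 radius sigma * Phi^{-1}(p_top). Uses the empirical
    top-class probability for simplicity (a sketch, not a sound bound)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(2, dtype=int)
    for _ in range(n):
        noisy = x + rng.normal(scale=sigma, size=x.shape)
        counts[base_classify(noisy)] += 1
    top = int(counts.argmax())
    p_top = counts[top] / n
    radius = sigma * NormalDist().inv_cdf(p_top) if p_top > 0.5 else 0.0
    return top, radius

x = np.full(8, 0.3)          # an input the base classifier labels 1
label, radius = smoothed_predict_and_certify(x)
print(label, round(radius, 3))
```

The smoothed prediction is robust: any perturbation with L2 norm below the returned radius cannot change the majority class, which is the property the certification literature builds on.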
We consider the problem of how to verify the security of probabilistic oblivious algorithms formally and systematically. Unfortunately, prior program logics fail to support a number of complexities that feature in the semantics and invariant needed to …
External link:
http://arxiv.org/abs/2407.00514
Randomized smoothing has shown promising certified robustness against adversaries in classification tasks. Despite such success with only zeroth-order access to base models, randomized smoothing has not been extended to a general form of regression.
External link:
http://arxiv.org/abs/2405.08892
Authors:
Jin, Jiankai, Chuengsatiansup, Chitchanok, Murray, Toby, Rubinstein, Benjamin I. P., Yarom, Yuval, Ohrimenko, Olga
Current implementations of differentially-private (DP) systems either lack support to track the global privacy budget consumed on a dataset, or fail to faithfully maintain the state continuity of this budget. We show that failure to maintain a privacy …
External link:
http://arxiv.org/abs/2401.17628
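The global budget-tracking problem described above can be sketched with a hypothetical in-memory accountant that deducts the cost of each query and refuses queries that would overspend. This illustrates only the accounting side; the paper's point concerns faithfully persisting this state across restarts (state continuity), which a plain in-memory tracker like this one does not address.

```python
class BudgetAccountant:
    """Hypothetical global privacy-budget tracker (simple epsilon
    composition). Illustrative only; not the cited paper's design."""

    def __init__(self, epsilon_total):
        self.epsilon_total = epsilon_total
        self.spent = 0.0

    def charge(self, epsilon):
        # Deduct epsilon from the budget; refuse if it would overspend.
        if self.spent + epsilon > self.epsilon_total:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon
        return self.epsilon_total - self.spent

acct = BudgetAccountant(epsilon_total=1.0)
print(acct.charge(0.4))  # remaining budget after the first query
print(acct.charge(0.4))  # remaining budget after the second query
```

A third charge of 0.4 would exceed the total of 1.0 and raise, which is exactly the guarantee that is lost if an attacker can roll the accountant's state back to an earlier snapshot.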
Published in:
Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security (AISec '23), November 30, 2023, Copenhagen, Denmark
In this paper we consider the setting where machine learning models are retrained on updated datasets in order to incorporate the most up-to-date information or reflect distribution shifts. We investigate whether one can infer information about these …
External link:
http://arxiv.org/abs/2309.11022
Federated Learning allows collaborative training without data sharing in settings where participants do not trust the central server and one another. Privacy can be further improved by ensuring that communication between the participants and the server …
External link:
http://arxiv.org/abs/2310.05960
Authors:
Cummings, Rachel, Desfontaines, Damien, Evans, David, Geambasu, Roxana, Huang, Yangsibo, Jagielski, Matthew, Kairouz, Peter, Kamath, Gautam, Oh, Sewoong, Ohrimenko, Olga, Papernot, Nicolas, Rogers, Ryan, Shen, Milan, Song, Shuang, Su, Weijie, Terzis, Andreas, Thakurta, Abhradeep, Vassilvitskii, Sergei, Wang, Yu-Xiang, Xiong, Li, Yekhanin, Sergey, Yu, Da, Zhang, Huanyu, Zhang, Wanrong
In this article, we present a detailed review of current practices and state-of-the-art methodologies in the field of differential privacy (DP), with a focus on advancing DP's deployment in real-world applications. Key points and high-level contents …
External link:
http://arxiv.org/abs/2304.06929
We study the top-$k$ selection problem under the differential privacy model: $m$ items are rated according to votes of a set of clients. We consider a setting in which algorithms can retrieve data via a sequence of accesses, each either a random access …
External link:
http://arxiv.org/abs/2301.13347
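The problem setting above can be illustrated with a standard baseline for private top-$k$ selection: the "peeling" exponential mechanism, which spends $\epsilon/k$ per round to sample one high-scoring item at a time. This is a textbook baseline for context, not the access-efficient algorithm of the cited paper.

```python
import numpy as np

def dp_top_k(scores, k, epsilon, sensitivity=1.0, seed=0):
    """Differentially private top-k via peeling: in each of k rounds,
    sample one remaining item with probability proportional to
    exp(eps_round * score / (2 * sensitivity))."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    remaining = list(range(len(scores)))
    eps_round = epsilon / k  # split the budget across the k selections
    chosen = []
    for _ in range(k):
        logits = eps_round * scores[remaining] / (2 * sensitivity)
        logits -= logits.max()  # numerical stability before exponentiating
        probs = np.exp(logits) / np.exp(logits).sum()
        idx = rng.choice(len(remaining), p=probs)
        chosen.append(remaining.pop(idx))
    return chosen

picked = dp_top_k([50, 3, 48, 2, 1], k=2, epsilon=8.0)
print(picked)
```

With a large score gap and a generous budget, the mechanism returns the true top-2 indices with high probability; as $\epsilon$ shrinks, the selection approaches uniform, which is the utility/privacy trade-off the top-$k$ literature quantifies.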
Authors:
Huang, Zhuoqun, Marchant, Neil G., Lucas, Keane, Bauer, Lujo, Ohrimenko, Olga, Rubinstein, Benjamin I. P.
Randomized smoothing is a leading approach for constructing classifiers that are certifiably robust against adversarial examples. Existing work on randomized smoothing has focused on classifiers with continuous inputs, such as images, where $\ell_p$-…
External link:
http://arxiv.org/abs/2302.01757
Authors:
Archer, David W., Pigem, Borja de Balle, Bogdanov, Dan, Craddock, Mark, Gascon, Adria, Jansen, Ronald, Jug, Matjaž, Laine, Kim, McLellan, Robert, Ohrimenko, Olga, Raykova, Mariana, Trask, Andrew, Wardley, Simon
This paper describes privacy-preserving approaches to statistical analysis. It presents motivations for applying such approaches to the statistical analysis of sensitive data, gives examples of use cases where these methods may apply, and …
External link:
http://arxiv.org/abs/2301.06167