Showing 1 - 10 of 30 for search: '"Erlingsson, Ulfar"'
Author:
Carlini, Nicholas, Tramer, Florian, Wallace, Eric, Jagielski, Matthew, Herbert-Voss, Ariel, Lee, Katherine, Roberts, Adam, Brown, Tom, Song, Dawn, Erlingsson, Ulfar, Oprea, Alina, Raffel, Colin
It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model. …
External link:
http://arxiv.org/abs/2012.07805
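The attack loop behind this result is easy to sketch. Below is a minimal illustration, not the paper's implementation: generate_sample and perplexity are assumed stand-ins for a real language-model API, and the perplexity-to-zlib-entropy ratio is one of the ranking metrics the paper uses to surface likely-memorized text.

import zlib

def generate_sample(model):
    # Assumed stand-in: draw one unconditioned sample from the model.
    raise NotImplementedError

def perplexity(model, text):
    # Assumed stand-in: the model's perplexity on the given text.
    raise NotImplementedError

def rank_extraction_candidates(model, n_samples=10000, top_k=100):
    # Rank generated samples by perplexity relative to zlib entropy: text the
    # model finds surprisingly easy, but a generic compressor does not, is a
    # likely memorized training sequence rather than merely low-entropy text.
    scored = []
    for _ in range(n_samples):
        s = generate_sample(model)
        zlib_entropy = len(zlib.compress(s.encode("utf-8")))
        scored.append((perplexity(model, s) / zlib_entropy, s))
    scored.sort(key=lambda pair: pair[0])
    return [s for _, s in scored[:top_k]]  # candidates for manual review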
Because learning sometimes involves sensitive data, machine learning algorithms have been extended to offer privacy for training data. In practice, this has been mostly an afterthought, with privacy-preserving models obtained by re-running training with a different optimizer. …
External link:
http://arxiv.org/abs/2007.14191
Author:
Erlingsson, Úlfar, Feldman, Vitaly, Mironov, Ilya, Raghunathan, Ananth, Song, Shuang, Talwar, Kunal, Thakurta, Abhradeep
Recently, a number of approaches and techniques have been introduced for reporting software statistics with strong privacy guarantees. These range from abstract algorithms to comprehensive systems with varying assumptions and built upon local differential privacy mechanisms and anonymity. …
External link:
http://arxiv.org/abs/2001.03618
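For a sense of the local differential privacy mechanisms these systems build on, here is the simplest possible sketch: binary randomized response, which satisfies epsilon-LDP for a single boolean report. Names and constants are illustrative; the systems surveyed in the paper are far more elaborate.

import math, random

def randomized_response(true_bit, epsilon):
    # Report truthfully with probability e^eps / (e^eps + 1); flipping
    # otherwise gives each user epsilon-local differential privacy.
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_truth else not true_bit

def estimate_rate(reports, epsilon):
    # Unbiased estimate of the true rate from the noisy reports.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)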
We develop techniques to quantify the degree to which a given (training or testing) example is an outlier in the underlying distribution. We evaluate five methods to score examples in a dataset by how well-represented the examples are, for different …
External link:
http://arxiv.org/abs/1910.13427
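As one simple instance of such a score (a generic density proxy, not necessarily one of the five methods the paper evaluates), the mean distance to an example's k nearest neighbors in some feature space can serve as an outlier measure:

import numpy as np

def knn_outlier_scores(features, k=5):
    # Pairwise Euclidean distances; O(n^2) memory, fine for a small sketch.
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude each point's self-distance
    nearest = np.sort(dists, axis=1)[:, :k]
    return nearest.mean(axis=1)  # larger = less well-represented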
The guarantees of security and privacy defenses are often strengthened by relaxing the assumptions made about attackers or the context in which defenses are deployed. Such relaxations can be a highly worthwhile topic of exploration---even though they …
External link:
http://arxiv.org/abs/1908.03566
Author:
McMahan, H. Brendan, Andrew, Galen, Erlingsson, Ulfar, Chien, Steve, Mironov, Ilya, Papernot, Nicolas, Kairouz, Peter
In this work we address the practical challenges of training machine learning models on privacy-sensitive datasets by introducing a modular approach that minimizes changes to training algorithms, provides a variety of configuration strategies for the privacy mechanism. …
External link:
http://arxiv.org/abs/1812.06210
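The step such a modular approach inserts into an existing training loop can be sketched in a few lines of numpy, in the style of differentially private SGD: clip each example's gradient, then add calibrated Gaussian noise. This is an illustrative sketch, not the paper's library code.

import numpy as np

def private_gradient(per_example_grads, clip_norm, noise_multiplier):
    rng = np.random.default_rng()
    # 1) Clip each example's gradient to bound any one example's influence.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # 2) Sum and add Gaussian noise calibrated to the clipping bound.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=per_example_grads.shape[1])
    # 3) Average; the optimizer uses this in place of the true gradient.
    return noisy_sum / per_example_grads.shape[0]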
Author:
Erlingsson, Úlfar, Feldman, Vitaly, Mironov, Ilya, Raghunathan, Ananth, Talwar, Kunal, Thakurta, Abhradeep
Sensitive statistics are often collected across sets of users, with repeated collection of reports done over time. For example, trends in users' private preferences or software usage may be monitored via such reports. We study the collection of such statistics in the local differential privacy model. …
External link:
http://arxiv.org/abs/1811.12469
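The pipeline under study is compact enough to sketch: each user applies a local randomizer to their value, and an intermediary shuffles the batch so the analyzer sees only an anonymous multiset. The paper's contribution is the analysis showing that this shuffling amplifies each report's local privacy guarantee; the code below only illustrates the data flow.

import random

def shuffled_collection(user_values, local_randomizer):
    # Each report individually satisfies some epsilon_0-LDP guarantee.
    reports = [local_randomizer(v) for v in user_values]
    random.shuffle(reports)  # unlink reports from user identities
    return reports  # the analyzer sees only this anonymous multiset

# e.g., with a trivial bit-flipping randomizer:
#   shuffled_collection([True, False, True],
#                       lambda b: b if random.random() < 0.75 else not b)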
Author:
Papernot, Nicolas, Song, Shuang, Mironov, Ilya, Raghunathan, Ananth, Talwar, Kunal, Erlingsson, Úlfar
The rapid adoption of machine learning has increased concerns about the privacy implications of machine learning models trained on sensitive data, such as medical records or other personal information. To address those concerns, one promising approach is Private Aggregation of Teacher Ensembles, or PATE, which transfers to a "student" model the knowledge of an ensemble of "teacher" models. …
External link:
http://arxiv.org/abs/1802.08908
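PATE's central operation, noisy aggregation of teacher votes, can be sketched directly. The Laplace scale gamma below is illustrative, and this is a schematic of the mechanism rather than the paper's implementation.

import numpy as np

def noisy_aggregate(teacher_votes, num_classes, gamma=0.1):
    rng = np.random.default_rng()
    # Histogram of the teachers' votes for one query; teachers are trained
    # on disjoint partitions of the sensitive data.
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    # Laplace noise makes the released label differentially private.
    counts += rng.laplace(0.0, 1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))  # the only signal the student ever sees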
This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models---a common type of machine-learning model. Because such models are sometimes trained on sensitive data (e.g., the text of users' private messages) …
External link:
http://arxiv.org/abs/1802.08232
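The methodology's headline quantity, the exposure of a planted "canary" sequence, can be sketched as follows; sequence_loss is an assumed stand-in for the model's scoring function.

import math

def sequence_loss(model, seq):
    # Assumed stand-in: the model's log-loss (negative log-likelihood) on seq.
    raise NotImplementedError

def exposure(model, canary, candidates):
    # Rank the canary by loss among random candidates drawn from the same
    # format space; exposure close to log2 of the space size indicates the
    # canary was memorized.
    canary_loss = sequence_loss(model, canary)
    rank = 1 + sum(1 for c in candidates
                   if sequence_loss(model, c) < canary_loss)
    return math.log2(len(candidates)) - math.log2(rank)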
Author:
Bittau, Andrea, Erlingsson, Úlfar, Maniatis, Petros, Mironov, Ilya, Raghunathan, Ananth, Lie, David, Rudominer, Mitch, Kode, Usharsee, Tinnes, Julien, Seefeld, Bernhard
Published in:
Proceedings of the 26th Symposium on Operating Systems Principles (SOSP), pp. 441-459, 2017
The large-scale monitoring of computer users' software activities has become commonplace, e.g., for application telemetry, error reporting, or demographic profiling. This paper describes a principled systems architecture---Encode, Shuffle, Analyze (ESA)---for performing such monitoring with high utility while also protecting user privacy. …
External link:
http://arxiv.org/abs/1710.00901
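The three ESA stages separate concerns as sketched below. All names, the "metric" field, and the thresholding rule are illustrative stand-ins, not Prochlo's actual interfaces.

import random
from collections import Counter

def encode(record):
    return record["metric"]  # strip user id and all other metadata

def shuffle(encoded_reports, threshold=10):
    random.shuffle(encoded_reports)  # break linkability and arrival order
    counts = Counter(encoded_reports)
    # Crowd thresholding: forward only values reported by many users.
    return [v for v in encoded_reports if counts[v] >= threshold]

def analyze(shuffled_reports):
    return Counter(shuffled_reports)  # aggregate statistics only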