Showing 1 - 10 of 27 for search: '"Cherubin, Giovanni"'
Author:
Debenedetti, Edoardo, Rando, Javier, Paleka, Daniel, Silaghi, Fineas Florin, Albastroiu, Dragos, Cohen, Niv, Lemberg, Yuval, Ghosh, Reshmi, Wen, Rui, Salem, Ahmed, Cherubin, Giovanni, Zanella-Béguelin, Santiago, Schmid, Robin, Klemm, Victor, Miki, Takahiro, Li, Chenhao, Kraft, Stefan, Fritz, Mario, Tramèr, Florian, Abdelnabi, Sahar, Schönherr, Lea
Large language model systems face important security risks from maliciously crafted messages that aim to overwrite the system's original instructions or leak private data. To study this problem, we organized a capture-the-flag competition at IEEE SaTML…
External link:
http://arxiv.org/abs/2406.07954
Author:
Abdelnabi, Sahar, Fay, Aideen, Cherubin, Giovanni, Salem, Ahmed, Fritz, Mario, Paverd, Andrew
Large Language Models are commonly used in retrieval-augmented applications to execute user instructions based on data from external sources. For example, modern search engines use LLMs to answer queries based on relevant search results; email plugins…
External link:
http://arxiv.org/abs/2406.00799
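The entry above describes the retrieval-augmented pattern in which externally sourced data is pasted into the model's prompt. A minimal Python sketch of that pattern follows; all names here (retrieve, build_prompt, call_llm) are hypothetical placeholders, not APIs from the paper:

def retrieve(query: str) -> list[str]:
    """Stand-in for a search engine or email fetcher returning
    external documents relevant to the query."""
    return ["Doc 1: ...", "Doc 2: ..."]

def build_prompt(user_query: str, documents: list[str]) -> str:
    # Externally sourced documents are pasted verbatim into the prompt.
    # If a document contains text like "ignore previous instructions",
    # the model may treat it as an instruction rather than as data --
    # the core risk studied in this line of work.
    context = "\n\n".join(documents)
    return (
        "You are a helpful assistant. Answer using only the context.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_query}\nAnswer:"
    )

prompt = build_prompt("When is my flight?", retrieve("flight confirmation"))
# call_llm(prompt)  # hypothetical model call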
Author:
Cherubin, Giovanni, Köpf, Boris, Paverd, Andrew, Tople, Shruti, Wutschitz, Lukas, Zanella-Béguelin, Santiago
Machine learning models trained with differentially private (DP) algorithms such as DP-SGD enjoy resilience against a wide range of privacy attacks. Although it is possible to derive bounds for some attacks based solely on an $(\varepsilon,\delta)$-DP guarantee…
External link:
http://arxiv.org/abs/2402.14397
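Bounds of the kind mentioned above build on the standard hypothesis-testing view of differential privacy (due to Kairouz et al.); the inequalities below are that general background result, not the paper's specific closed-form bounds. Any attacker deciding whether a given record was used in training, with false positive rate $\alpha$ and false negative rate $\beta$, satisfies under $(\varepsilon,\delta)$-DP:
\[
  \alpha + e^{\varepsilon}\beta \ge 1 - \delta,
  \qquad
  e^{\varepsilon}\alpha + \beta \ge 1 - \delta,
\]
so that, for $\delta = 0$, summing the two constraints gives $(1 + e^{\varepsilon})(\alpha + \beta) \ge 2$ and hence the attacker's advantage is bounded by
\[
  1 - \alpha - \beta \;\le\; \frac{e^{\varepsilon} - 1}{e^{\varepsilon} + 1}.
\]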
Author:
Salem, Ahmed, Cherubin, Giovanni, Evans, David, Köpf, Boris, Paverd, Andrew, Suri, Anshuman, Tople, Shruti, Zanella-Béguelin, Santiago
Deploying machine learning models in production may allow adversaries to infer sensitive information about training data. There is a vast literature analyzing different types of inference risks, ranging from membership inference to reconstruction attacks…
External link:
http://arxiv.org/abs/2212.10986
Author:
Jordon, James, Szpruch, Lukasz, Houssiau, Florimond, Bottarelli, Mirko, Cherubin, Giovanni, Maple, Carsten, Cohen, Samuel N., Weller, Adrian
This explainer document aims to provide an overview of the current state of the rapidly expanding work on synthetic data technologies, with a particular focus on privacy. The article is intended for a non-technical audience, though some formal definitions…
External link:
http://arxiv.org/abs/2205.03257
Conformal prediction (CP) is a wrapper around traditional machine learning models, giving coverage guarantees under the sole assumption of exchangeability; in classification problems, for a chosen significance level $\varepsilon$, CP guarantees that the error rate does not exceed $\varepsilon$…
External link:
http://arxiv.org/abs/2202.01315
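To make the guarantee above concrete, here is a minimal split conformal prediction sketch in Python. Note the paper itself concerns full CP; this is the simpler held-out-calibration variant, shown only for intuition, and all scores below are hypothetical:

import numpy as np

def calibrate(scores_cal: np.ndarray, epsilon: float) -> float:
    """scores_cal: nonconformity scores of the true labels on a
    held-out calibration set (higher = less conforming)."""
    n = len(scores_cal)
    # Take the ceil((n+1)(1-epsilon))-th smallest score (capped at n),
    # the finite-sample-corrected conformal quantile.
    k = int(np.ceil((n + 1) * (1 - epsilon)))
    return np.sort(scores_cal)[min(k, n) - 1]

def predict_set(scores_per_label: np.ndarray, threshold: float) -> list[int]:
    """Return every candidate label whose nonconformity score is within
    the calibrated threshold. Under exchangeability, the true label is
    included with probability at least 1 - epsilon."""
    return [y for y, s in enumerate(scores_per_label) if s <= threshold]

rng = np.random.default_rng(0)
cal_scores = rng.uniform(size=1000)      # e.g., 1 - softmax probability
tau = calibrate(cal_scores, epsilon=0.1)
print(predict_set(np.array([0.05, 0.4, 0.95]), tau))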
Given access to a machine learning model, can an adversary reconstruct the model's training data? This work studies this question through the lens of a powerful informed adversary who knows all the training data points except one. By instantiating concrete attacks…
External link:
http://arxiv.org/abs/2201.04845
Conformal Predictors (CP) are wrappers around ML models, providing error guarantees under weak assumptions on the data distribution. They are suitable for a wide range of problems, from classification and regression to anomaly detection. Unfortunately…
External link:
http://arxiv.org/abs/2102.03236
Security system designers favor worst-case security metrics, such as those derived from differential privacy (DP), due to the strong guarantees they provide. On the downside, these guarantees result in a high penalty on the system's performance. In this…
External link:
http://arxiv.org/abs/2011.03396
A membership inference attack (MIA) against a machine-learning model enables an attacker to determine whether a given data record was part of the model's training data or not. In this paper, we provide an in-depth study of the phenomenon of disparate vulnerability…
External link:
http://arxiv.org/abs/1906.00389
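As background for the attack definition above, a common MIA baseline thresholds the model's loss on the candidate record, since models typically fit training records more tightly. The sketch below is that generic baseline (in the spirit of Yeom et al.), not the paper's specific methodology, and all values are hypothetical:

import numpy as np

def cross_entropy(probs: np.ndarray, label: int) -> float:
    # Loss of the target model's predicted distribution on the record.
    return -float(np.log(probs[label] + 1e-12))

def mia_guess(model_probs: np.ndarray, label: int, threshold: float) -> bool:
    """Guess 'member' if the model's loss on (x, y) falls below the
    threshold; low loss suggests the record was seen in training."""
    return cross_entropy(model_probs, label) < threshold

# Hypothetical usage: probs would come from querying the target model.
probs = np.array([0.85, 0.10, 0.05])
print(mia_guess(probs, label=0, threshold=0.5))  # True: low loss -> member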