Showing 1 - 10 of 8,838 for the search: '"Benjamin, I"'
With the growing integration of AI in daily life, ensuring the robustness of systems to inference-time attacks is crucial. Among the approaches for certifying robustness to such adversarial examples, randomized smoothing has emerged as highly promising…
External link:
http://arxiv.org/abs/2408.00728
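The snippet above refers to randomized smoothing for classification. As a rough illustration of the core idea only (not this paper's method), the smoothed classifier returns the majority-vote class of a base model queried on Gaussian-perturbed copies of the input; the vote margin can then be converted into a certified L2 robustness radius (Cohen et al., 2019). The function name and toy classifier below are illustrative assumptions:

```python
import random
from collections import Counter

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=None):
    # Randomized smoothing (sketch): query the base model on n_samples
    # Gaussian-perturbed copies of x and return the majority-vote class.
    # Only zeroth-order access to base_classifier is needed.
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        votes[base_classifier(noisy)] += 1
    return votes.most_common(1)[0][0]

# Toy usage: a 1-D threshold classifier; inputs far from the boundary
# keep their label under noise, which is what the certificate formalizes.
base = lambda v: int(v[0] > 0)
label = smoothed_predict(base, [3.0], sigma=0.5, n_samples=200, seed=0)
```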
Reuse of data in adaptive workflows poses challenges regarding overfitting and the statistical validity of results. Previous work has demonstrated that interacting with data via differentially private algorithms can mitigate overfitting, achieving worst-case…
External link:
http://arxiv.org/abs/2405.13375
Modern NLP models are often trained on public datasets drawn from diverse sources, rendering them vulnerable to data poisoning attacks. These attacks can manipulate the model's behavior in ways engineered by the attacker. One such tactic involves the…
External link:
http://arxiv.org/abs/2405.11575
Randomized smoothing has shown promising certified robustness against adversaries in classification tasks. Despite such success with only zeroth-order access to base models, randomized smoothing has not been extended to a general form of regression.
External link:
http://arxiv.org/abs/2405.08892
Author:
He, Xuanli, Wang, Jun, Xu, Qiongkai, Minervini, Pasquale, Stenetorp, Pontus, Rubinstein, Benjamin I. P., Cohn, Trevor
The implications of backdoor attacks on English-centric large language models (LLMs) have been widely examined: such attacks can be achieved by embedding malicious behaviors during training that are activated under specific conditions that trigger malicious…
External link:
http://arxiv.org/abs/2404.19597
While multilingual machine translation (MNMT) systems hold substantial promise, they also have security vulnerabilities. Our research highlights that MNMT systems can be susceptible to a particularly devious style of backdoor attack, whereby an attacker…
External link:
http://arxiv.org/abs/2404.02393
Author:
Jin, Jiankai, Chuengsatiansup, Chitchanok, Murray, Toby, Rubinstein, Benjamin I. P., Yarom, Yuval, Ohrimenko, Olga
Current implementations of differentially private (DP) systems either lack support to track the global privacy budget consumed on a dataset, or fail to faithfully maintain the state continuity of this budget. We show that failure to maintain a privacy…
External link:
http://arxiv.org/abs/2401.17628
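The snippet above concerns tracking a global privacy budget. As a generic illustration of the concept (not the paper's system), a budget accountant makes every DP query reserve part of a fixed epsilon budget before running and refuses queries once it is exhausted. The class and method names below are assumptions; note this in-memory sketch does not address the state-continuity problem (persisting the budget faithfully across restarts) that the paper highlights:

```python
import threading

class BudgetAccountant:
    # Minimal sketch of a global privacy-budget tracker: charge() must be
    # called (and succeed) before each DP query is executed.
    def __init__(self, total_epsilon):
        self.total = total_epsilon
        self.spent = 0.0
        self._lock = threading.Lock()  # serialize concurrent charges

    def charge(self, epsilon):
        # Reserve epsilon from the remaining budget, or refuse the query.
        with self._lock:
            if self.spent + epsilon > self.total:
                raise RuntimeError("privacy budget exhausted")
            self.spent += epsilon
            return self.total - self.spent  # remaining budget

# Usage: two queries fit within epsilon = 1.0; a third is refused.
acc = BudgetAccountant(1.0)
remaining = acc.charge(0.4)
```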
Author:
Andrias, Kate, Sachs, Benjamin I.
Published in:
Columbia Law Review, 2024 Apr 01. 124(3), 777-850.
External link:
https://www.jstor.org/stable/27303657