Showing 1 - 8 of 8
for search: '"Lucas, Keane"'
Machine-learning models are known to be vulnerable to evasion attacks that perturb model inputs to induce misclassifications. In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks. Spec…
External link:
http://arxiv.org/abs/2306.16614
It is becoming increasingly imperative to design robust ML defenses. However, recent work has found that many defenses that initially resist state-of-the-art attacks can be broken by an adaptive adversary. In this work we take steps to simplify the d…
External link:
http://arxiv.org/abs/2302.13464
Author:
Huang, Zhuoqun, Marchant, Neil G., Lucas, Keane, Bauer, Lujo, Ohrimenko, Olga, Rubinstein, Benjamin I. P.
Randomized smoothing is a leading approach for constructing classifiers that are certifiably robust against adversarial examples. Existing work on randomized smoothing has focused on classifiers with continuous inputs, such as images, where $\ell_p$-…
External link:
http://arxiv.org/abs/2302.01757
Author:
Lucas, Keane, Allen, Ross E.
Cooperative artificial intelligence with human or superhuman proficiency in collaborative tasks stands at the frontier of machine learning research. Prior work has tended to evaluate cooperative AI performance under the restrictive paradigms of self-…
External link:
http://arxiv.org/abs/2201.12436
We propose new, more efficient targeted white-box attacks against deep neural networks. Our attacks better align with the attacker's goal: (1) tricking a model to assign higher probability to the target class than to any other class, while (2) stayin…
External link:
http://arxiv.org/abs/2112.14232
Motivated by the transformative impact of deep neural networks (DNNs) in various domains, researchers and anti-virus vendors have proposed DNNs for malware detection from raw bytes that do not require manual feature engineering. In this work, we prop…
External link:
http://arxiv.org/abs/1912.09064
Author:
Huang, Zhuoqun, Marchant, Neil G., Lucas, Keane, Bauer, Lujo, Ohrimenko, Olga, Rubinstein, Benjamin I. P.
Certified defenses are a recent development in adversarial machine learning (ML), which aim to rigorously guarantee the robustness of ML models to adversarial perturbations. A large body of work studies certified defenses in computer vision, where …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7e6194b1002e264840ec2f8048ba2e6a
Author:
Gupta, Ritwik, Goodman, Bryce, Patel, Nirav, Hosfelt, Richard, Sajeev, Sandra, Heim, Eric, Doshi, Jigar, Lucas, Keane, Choset, Howard, Gaston, Matthew
We present a preliminary report for xBD, a new large-scale dataset for the advancement of change detection and building damage assessment for humanitarian assistance and disaster recovery research. Logistics, resource planning, and damage estimation a…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::120ba986f5acbcc0f541cf7006375b65