Showing 1 - 10 of 44 for search: '"Fowl, Liam"'
Author:
Souri, Hossein, Bansal, Arpit, Kazemi, Hamid, Fowl, Liam, Saha, Aniruddha, Geiping, Jonas, Wilson, Andrew Gordon, Chellappa, Rama, Goldstein, Tom, Goldblum, Micah
Modern neural networks are often trained on massive datasets that are web scraped with minimal human inspection. As a result of this insecure curation pipeline, an adversary can poison or backdoor the resulting model by uploading malicious data to the …
External link:
http://arxiv.org/abs/2403.16365
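To make the threat concrete, a patch-style backdoor (the classic construction from the backdoor literature, not necessarily this paper's method) can be sketched as below; the patch size, patch value, and target label are illustrative assumptions.

```python
import numpy as np

def poison_example(image: np.ndarray, target_label: int,
                   patch_size: int = 3, patch_value: float = 1.0):
    """Stamp a small trigger patch onto the image and relabel it to the
    attacker's target class. Illustrative of patch-style backdoors in
    general, not the specific attack studied in the paper above."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value  # trigger in the bottom-right corner
    return poisoned, target_label
```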
Author:
Labrador, Beltrán, Zhao, Guanlong, Moreno, Ignacio López, Scarpati, Angelo Scorza, Fowl, Liam, Wang, Quan
In this paper, we present a novel approach to adapt a sequence-to-sequence Transformer-Transducer ASR system to the keyword spotting (KWS) task. We achieve this by replacing the keyword in the text transcription with a special token and training …
External link:
http://arxiv.org/abs/2211.06478
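A minimal sketch of the transcript preprocessing this snippet describes, assuming a hypothetical placeholder token `<kw>` (the actual token and training pipeline in the paper may differ):

```python
import re

KW_TOKEN = "<kw>"  # hypothetical special token standing in for the keyword

def mask_keyword(transcript: str, keyword: str) -> str:
    """Replace every occurrence of the keyword in a transcript with the special
    token, so the model learns to emit the token whenever the keyword is spoken."""
    pattern = re.compile(rf"\b{re.escape(keyword)}\b", flags=re.IGNORECASE)
    return pattern.sub(KW_TOKEN, transcript)

print(mask_keyword("hey computer play some music", "hey computer"))
# -> "<kw> play some music"
```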
Author:
Wen, Yuxin, Geiping, Jonas, Fowl, Liam, Souri, Hossein, Chellappa, Rama, Goldblum, Micah, Goldstein, Tom
Federated learning is particularly susceptible to model poisoning and backdoor attacks because individual users have direct control over the training data and model updates. At the same time, the attack power of an individual user is limited because …
External link:
http://arxiv.org/abs/2210.09305
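The limited power of a single attacker is often countered in the federated backdoor literature by boosting the malicious update so it survives averaging. A minimal sketch of that generic "model replacement" idea (not necessarily the attack in this paper), assuming plain FedAvg over `num_clients` equally weighted clients:

```python
def boosted_update(global_params, malicious_params, num_clients: int):
    """Scale a single attacker's update so that, after averaging with roughly
    unchanged benign updates, the aggregated model lands near the malicious one.
    Generic model-replacement sketch, not this paper's specific attack."""
    return [g + num_clients * (m - g) for g, m in zip(global_params, malicious_params)]
```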
Author:
Sandoval-Segura, Pedro, Singla, Vasu, Fowl, Liam, Geiping, Jonas, Goldblum, Micah, Jacobs, David, Goldstein, Tom
Imperceptible poisoning attacks on entire datasets have recently been touted as methods for protecting data privacy. However, among a number of defenses preventing the practical use of these techniques, early-stopping stands out as a simple, yet effective …
External link:
http://arxiv.org/abs/2204.08615
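A generic early-stopping loop of the kind alluded to (illustrative only; the helper callables and the patience value are assumptions, not the paper's exact protocol):

```python
def train_with_early_stopping(train_one_epoch, evaluate_clean, max_epochs=100, patience=5):
    """Stop once accuracy on a small clean validation set has not improved for
    `patience` epochs, limiting how long the model can fit the poisoned signal."""
    best_acc, stale = 0.0, 0
    for _ in range(max_epochs):
        train_one_epoch()        # one pass over the (possibly poisoned) training data
        acc = evaluate_clean()   # accuracy on held-out clean data
        if acc > best_acc:
            best_acc, stale = acc, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best_acc
```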
Author:
Somepalli, Gowthami, Fowl, Liam, Bansal, Arpit, Yeh-Chiang, Ping, Dar, Yehuda, Baraniuk, Richard, Goldblum, Micah, Goldstein, Tom
We discuss methods for visualizing neural network decision boundaries and decision regions. We use these visualizations to investigate issues related to reproducibility and generalization in neural network training. We observe that changes in model architecture …
External link:
http://arxiv.org/abs/2203.08124
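One common way to visualize decision regions is to sample predictions on a low-dimensional slice of input space. The sketch below samples the plane through three inputs; it illustrates the general technique rather than the paper's exact procedure, and `classify` is an assumed batch-prediction function:

```python
import numpy as np

def decision_plane(classify, x0, x1, x2, steps=50):
    """Return class labels on the 2-D plane spanned by three flattened inputs.
    `classify` maps an (N, D) batch to N integer labels; the result can be shown
    with plt.contourf to reveal decision regions along this slice."""
    a, b = x1 - x0, x2 - x0                           # basis vectors of the plane
    alphas, betas = np.meshgrid(np.linspace(0, 1, steps), np.linspace(0, 1, steps))
    grid = x0 + alphas[..., None] * a + betas[..., None] * b
    return classify(grid.reshape(-1, x0.size)).reshape(steps, steps)
```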
Federated learning (FL) has rapidly risen in popularity due to its promise of privacy and efficiency. Previous works have exposed privacy vulnerabilities in the FL pipeline by recovering user data from gradient updates. However, existing attacks fail …
External link:
http://arxiv.org/abs/2202.00580
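The kind of gradient-matching reconstruction meant by "recovering user data from gradient updates" can be sketched roughly as below. This follows earlier gradient inversion work in spirit, not this paper's specific attack; `model`, `loss_fn`, the captured `target_grads`, and the known label `y` are assumed inputs.

```python
import torch

def invert_gradients(model, loss_fn, target_grads, x_shape, y, steps=200, lr=0.1):
    """Optimize a dummy input so that its gradients match an observed update."""
    x = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        grads = torch.autograd.grad(loss_fn(model(x), y), model.parameters(),
                                    create_graph=True)
        mismatch = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        mismatch.backward()
        opt.step()
    return x.detach()
```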
Author:
Fowl, Liam, Geiping, Jonas, Reich, Steven, Wen, Yuxin, Czaja, Wojtek, Goldblum, Micah, Goldstein, Tom
A central tenet of Federated learning (FL), which trains models without centralizing user data, is privacy. However, previous work has shown that the gradient updates used in FL can leak user information. While most industrial uses of FL are for …
External link:
http://arxiv.org/abs/2201.12675
Data poisoning for reinforcement learning has historically focused on general performance degradation, and targeted attacks have been successful via perturbations that involve control of the victim's policy and rewards. We introduce an insidious poisoning …
External link:
http://arxiv.org/abs/2201.00762
Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency. Previous works have shown that federated gradient updates contain information that can be used to approximately recover user data in some situations …
External link:
http://arxiv.org/abs/2110.13057
The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data. In this work, we show that adversarial examples, originally intended for attacking pre-trained models, are …
External link:
http://arxiv.org/abs/2106.10807
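A standard one-step adversarial perturbation (FGSM) is the kind of construction the snippet refers to repurposing as a training-time poison; the epsilon below is an illustrative value, and `model`, `loss_fn`, `x`, and `y` are assumed inputs.

```python
import torch

def fgsm_example(model, loss_fn, x, y, eps=8 / 255):
    """Craft a one-step adversarial example by moving x in the direction that
    increases the loss; such examples can also be injected into training data."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```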