Showing 1 - 10 of 27 results for the search: '"Rajabi, Arezoo"'
Author:
Sahabandu, Dinuka, Xu, Xiaojun, Rajabi, Arezoo, Niu, Luyao, Ramasubramanian, Bhaskar, Li, Bo, Poovendran, Radha
We propose and analyze an adaptive adversary that can retrain a Trojaned DNN and is also aware of state-of-the-art (SOTA) output-based Trojaned model detectors. We show that such an adversary can ensure (1) high accuracy on both trigger-embedded and clean samples and …
External link:
http://arxiv.org/abs/2402.08695
Author:
Rajabi, Arezoo, Pimple, Reeya, Janardhanan, Aiswarya, Asokraj, Surudhi, Ramasubramanian, Bhaskar, Poovendran, Radha
Transfer learning (TL) has been demonstrated to improve DNN model performance when faced with a scarcity of training samples. However, the suitability of TL as a solution to reduce the vulnerability of overfitted DNNs to privacy attacks is unexplored. …
External link:
http://arxiv.org/abs/2402.01114
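For orientation only, a minimal transfer-learning sketch in the spirit of the abstract above: freeze a pretrained backbone and retrain only a small head on the scarce target data. The ResNet-18 backbone, 10-class head, and optimizer settings are illustrative assumptions, not the paper's actual setup.

    # Hedged sketch: one common TL recipe (frozen backbone + new trainable head).
    # The backbone choice, class count, and learning rate are assumptions.
    import torch
    import torch.nn as nn
    from torchvision import models

    num_classes = 10  # hypothetical target task

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in backbone.parameters():
        p.requires_grad = False                      # keep pretrained features fixed
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new head

    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def train_step(x, y):
        """One fine-tuning step on a (scarce) target-task batch (x, y)."""
        optimizer.zero_grad()
        loss = criterion(backbone(x), y)
        loss.backward()
        optimizer.step()
        return loss.item()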
Author:
Rajabi, Arezoo, Asokraj, Surudhi, Jiang, Fengqing, Niu, Luyao, Ramasubramanian, Bhaskar, Ritcey, Jim, Poovendran, Radha
Machine learning models that use deep neural networks (DNNs) are vulnerable to backdoor attacks. An adversary carrying out a backdoor attack embeds a predefined perturbation, called a trigger, into a small subset of input samples and trains the DNN …
External link:
http://arxiv.org/abs/2308.15673
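To make the attack model above concrete, here is a minimal data-poisoning sketch of the generic backdoor recipe the abstract describes: stamp a fixed trigger onto a small fraction of training samples and relabel them to an attacker-chosen class. The 3x3 corner patch, 5% poison rate, and target label are illustrative assumptions, not the configuration studied in the paper.

    # Hedged sketch of generic backdoor poisoning; parameters are assumptions.
    import numpy as np

    def poison_dataset(images, labels, target_label=0, poison_rate=0.05, seed=0):
        """images: (N, H, W, C) floats in [0, 1]; labels: (N,) ints.
        Returns copies in which a small random subset carries the trigger
        and is relabeled to the attacker's target class."""
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(poison_rate * len(images)),
                         replace=False)
        images[idx, -3:, -3:, :] = 1.0   # stamp a 3x3 white patch in the corner
        labels[idx] = target_label       # mislabel the poisoned samples
        return images, labels

    # A DNN trained on the returned (images, labels) then tends to predict
    # `target_label` for any input carrying the patch, while remaining
    # accurate on clean inputs.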
The data used to train deep neural network (DNN) models in applications such as healthcare and finance typically contain sensitive information. A DNN model may suffer from overfitting, and overfitted models have been shown to be susceptible to query-based …
External link:
http://arxiv.org/abs/2212.01688
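As a concrete, hypothetical example of the kind of query-based attack that overfitted models are known to leak to, a confidence-threshold membership-inference check might look like the following; the threshold and the model interface are assumptions, not the attack analyzed in the paper.

    # Hedged sketch: confidence-threshold membership inference.
    import numpy as np

    def membership_guess(predict_proba, x, threshold=0.9):
        """Guess that x was in the training set if the model's top-class
        confidence is unusually high; overfitted models tend to be far
        more confident on samples they were trained on."""
        confidence = float(np.max(predict_proba(x)))
        return confidence >= threshold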
Author:
Sahabandu, Dinuka, Rajabi, Arezoo, Niu, Luyao, Li, Bo, Ramasubramanian, Bhaskar, Poovendran, Radha
Machine learning models in the wild have been shown to be vulnerable to Trojan attacks during training. Although many detection mechanisms have been proposed, strong adaptive attackers have been shown to be effective against them. In this paper, we …
External link:
http://arxiv.org/abs/2207.05937
Machine learning (ML) models that use deep neural networks are vulnerable to backdoor attacks. Such attacks involve the insertion of a (hidden) trigger by an adversary. As a consequence, any input that contains the trigger will cause the neural network …
External link:
http://arxiv.org/abs/2203.15506
Cyber and cyber-physical systems equipped with machine learning algorithms, such as autonomous cars, share environments with humans. In such a setting, it is important to align system (or agent) behaviors with the preferences of one or more human users …
External link:
http://arxiv.org/abs/2203.10165
Published in:
In Computational Toxicology, June 2024, 30
Author:
Rajabi, Arezoo, Bobba, Rakesh B.
Published in:
DSN Workshop on Dependable and Secure Machine Learning (DSML 2019)
Despite the high accuracy of Convolutional Neural Networks (CNNs), they are vulnerable to adversarial and out-of-distribution examples. Many methods have been proposed to detect these fooling examples or to make CNNs robust against them. However, most …
External link:
http://arxiv.org/abs/2011.09123
We aim to demonstrate the influence of diversity in an ensemble of CNNs on the detection of black-box adversarial instances and on hardening the generation of white-box adversarial attacks. To this end, we propose an ensemble of diverse specialized CNNs …
External link:
http://arxiv.org/abs/2005.08321
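As an illustration of how a diverse ensemble can be used for detection, a simple disagreement-based rejection rule is sketched below; the majority vote and the 80% agreement threshold are assumptions, not the paper's proposed mechanism.

    # Hedged sketch: reject inputs on which diverse ensemble members disagree.
    import torch

    def predict_or_reject(models, x, min_agreement=0.8):
        """Return the majority label per sample, or -1 where fewer than
        `min_agreement` of the ensemble members agree (flagging a likely
        black-box adversarial input)."""
        votes = torch.stack([m(x).argmax(dim=-1) for m in models])  # (M, B)
        majority = torch.mode(votes, dim=0).values                  # (B,)
        agreement = (votes == majority).float().mean(dim=0)         # fraction agreeing
        return torch.where(agreement >= min_agreement, majority,
                           torch.full_like(majority, -1))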