Zobrazeno 1 - 10
of 56
pro vyhledávání: '"Ramasubramanian, Bhaskar"'
Autor:
Niu, Luyao, Zhang, Hongchao, Sahabandu, Dinuka, Ramasubramanian, Bhaskar, Clark, Andrew, Poovendran, Radha
Multi-agent cyber-physical systems are present in a variety of applications. Agent decision-making can be affected due to errors induced by uncertain, dynamic operating environments or due to incorrect actions taken by an agent. When an erroneous dec
Externí odkaz:
http://arxiv.org/abs/2410.20288
Autor:
Sahabandu, Dinuka, Ramasubramanian, Bhaskar, Alexiou, Michail, Mertoguno, J. Sukarno, Bushnell, Linda, Poovendran, Radha
This paper introduces a novel reinforcement learning (RL) strategy designed to facilitate rapid autonomy transfer by utilizing pre-trained critic value functions from multiple environments. Unlike traditional methods that require extensive retraining
Externí odkaz:
http://arxiv.org/abs/2407.20466
Autor:
Li, Yuetai, Xu, Zhangchen, Jiang, Fengqing, Niu, Luyao, Sahabandu, Dinuka, Ramasubramanian, Bhaskar, Poovendran, Radha
The remarkable performance of large language models (LLMs) in generation tasks has enabled practitioners to leverage publicly available models to power custom applications, such as chatbots and virtual assistants. However, the data used to train or f
Externí odkaz:
http://arxiv.org/abs/2406.12257
Autor:
Jiang, Fengqing, Xu, Zhangchen, Niu, Luyao, Xiang, Zhen, Ramasubramanian, Bhaskar, Li, Bo, Poovendran, Radha
Safety is critical to the usage of large language models (LLMs). Multiple techniques such as data filtering and supervised fine-tuning have been developed to strengthen LLM safety. However, currently known techniques presume that corpora used for saf
Externí odkaz:
http://arxiv.org/abs/2402.11753
Autor:
Sahabandu, Dinuka, Xu, Xiaojun, Rajabi, Arezoo, Niu, Luyao, Ramasubramanian, Bhaskar, Li, Bo, Poovendran, Radha
We propose and analyze an adaptive adversary that can retrain a Trojaned DNN and is also aware of SOTA output-based Trojaned model detectors. We show that such an adversary can ensure (1) high accuracy on both trigger-embedded and clean samples and (
Externí odkaz:
http://arxiv.org/abs/2402.08695
Autor:
Rajabi, Arezoo, Pimple, Reeya, Janardhanan, Aiswarya, Asokraj, Surudhi, Ramasubramanian, Bhaskar, Poovendran, Radha
Transfer learning (TL) has been demonstrated to improve DNN model performance when faced with a scarcity of training samples. However, the suitability of TL as a solution to reduce vulnerability of overfitted DNNs to privacy attacks is unexplored. A
Externí odkaz:
http://arxiv.org/abs/2402.01114
Autor:
Xiang, Zhen, Jiang, Fengqing, Xiong, Zidi, Ramasubramanian, Bhaskar, Poovendran, Radha, Li, Bo
Large language models (LLMs) are shown to benefit from chain-of-thought (COT) prompting, particularly when tackling tasks that require systematic reasoning processes. On the other hand, COT prompting also poses new vulnerabilities in the form of back
Externí odkaz:
http://arxiv.org/abs/2401.12242
Autor:
Rajabi, Arezoo, Asokraj, Surudhi, Jiang, Fengqing, Niu, Luyao, Ramasubramanian, Bhaskar, Ritcey, Jim, Poovendran, Radha
Machine learning models that use deep neural networks (DNNs) are vulnerable to backdoor attacks. An adversary carrying out a backdoor attack embeds a predefined perturbation called a trigger into a small subset of input samples and trains the DNN suc
Externí odkaz:
http://arxiv.org/abs/2308.15673
Autonomous cyber and cyber-physical systems need to perform decision-making, learning, and control in unknown environments. Such decision-making can be sensitive to multiple factors, including modeling errors, changes in costs, and impacts of events
Externí odkaz:
http://arxiv.org/abs/2304.02005
The data used to train deep neural network (DNN) models in applications such as healthcare and finance typically contain sensitive information. A DNN model may suffer from overfitting. Overfitted models have been shown to be susceptible to query-base
Externí odkaz:
http://arxiv.org/abs/2212.01688