Showing 1 - 10 of 16
for the search: '"Wang, Chenan"'
Deep Neural Network (DNN) models, when deployed on executing devices as inference engines, are susceptible to Fault Injection Attacks (FIAs) that manipulate model parameters to disrupt inference execution, causing disastrous performance degradation. This work i…
External link:
http://arxiv.org/abs/2401.16766
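The entry above describes fault injection attacks that corrupt model parameters at inference time. As a hedged illustration only (not the paper's method), the sketch below simulates a single bit-flip fault in one float32 weight of a PyTorch model; the model, targeted parameter, and chosen bit are illustrative assumptions.

```python
# Hedged sketch: simulate a single bit-flip fault in a float32 weight.
# The model and targeted parameter are illustrative, not from the paper.
import struct
import torch
import torch.nn as nn

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 float32 encoding of `value`."""
    packed = struct.unpack("<I", struct.pack("<f", value))[0]
    packed ^= 1 << bit
    return struct.unpack("<f", struct.pack("<I", packed))[0]

model = nn.Linear(8, 2)  # stand-in for a deployed inference engine
with torch.no_grad():
    w = model.weight.clone()
    # Flip the highest exponent bit (bit 30) of one weight: a small
    # hardware-level fault can blow the value up and wreck inference.
    w[0, 0] = flip_bit(float(w[0, 0]), 30)
    model.weight.copy_(w)

x = torch.randn(1, 8)
print(model(x))  # output is now dominated by the corrupted weight
```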
This paper introduces an attack mechanism to challenge the resilience of autonomous driving systems. Specifically, we manipulate the decision-making processes of an autonomous vehicle by dynamically displaying adversarial patches on a screen mount…
External link:
http://arxiv.org/abs/2312.06701
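The entry above concerns adversarial patches shown to a driving system's perception model. As a hedged, generic illustration (the classifier, patch size, placement, and loss are assumptions, not the paper's setup), a PGD-style patch optimization loop might look like this:

```python
# Hedged sketch of generic adversarial-patch optimization (not the paper's method).
# `classifier` is any differentiable image classifier; sizes are illustrative.
import torch
import torch.nn.functional as F

def optimize_patch(classifier, images, labels, patch_size=32, steps=100, lr=0.05):
    """Optimize a square patch that, when pasted onto the input, degrades predictions."""
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        pasted = images.clone()
        pasted[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)  # fixed top-left placement
        loss = -F.cross_entropy(classifier(pasted), labels)  # maximize the true-class loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```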
Author:
Zhao, Zhengyue, Duan, Jinhao, Xu, Kaidi, Wang, Chenan, Zhang, Rui, Du, Zidong, Guo, Qi, Hu, Xing
Stable Diffusion has established itself as a foundation model in generative AI artistic applications, receiving widespread research attention and application. Some recent fine-tuning methods have made it feasible for individuals to implant personalized concept…
External link:
http://arxiv.org/abs/2312.00084
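The entry above refers to fine-tuning methods that implant personalized concepts into Stable Diffusion. As a hedged example, assuming a recent Hugging Face `diffusers` release, loading a pre-trained textual-inversion concept (one such implantation method; model and concept IDs are illustrative) looks roughly like this:

```python
# Hedged sketch, assuming a recent diffusers release; model/concept IDs are examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Implant a personalized concept learned via textual inversion; the learned
# embedding binds a new token (here "<cat-toy>") to the concept.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

image = pipe("a photo of <cat-toy> on a beach").images[0]
image.save("personalized_concept.png")
```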
Traditional adversarial attacks concentrate on manipulating clean examples in the pixel space by adding adversarial perturbations. By contrast, semantic adversarial attacks focus on changing semantic attributes of clean examples, such as color, conte…
External link:
http://arxiv.org/abs/2309.07398
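The entry above contrasts semantic attacks (altering attributes such as color) with pixel-space perturbations. A deliberately crude, hedged illustration of the color case (not the paper's method; the classifier is an assumption) is a hue-shift search for a prediction change:

```python
# Hedged sketch: a crude semantic (color) attack via hue shifts, not the paper's method.
import torchvision.transforms.functional as TF

def hue_shift_attack(classifier, image, label, steps=20):
    """Scan hue offsets and return the first color-shifted image that flips the label."""
    for i in range(steps):
        hue = -0.5 + i / steps  # hue factor in [-0.5, 0.5)
        shifted = TF.adjust_hue(image, hue)  # change a semantic attribute, not raw pixels
        pred = classifier(shifted.unsqueeze(0)).argmax(dim=1).item()
        if pred != label:
            return shifted, hue  # color changed, prediction flipped
    return None, None
```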
Author:
Duan, Jinhao, Cheng, Hao, Wang, Shiqi, Zavalny, Alex, Wang, Chenan, Xu, Renjing, Kailkhura, Bhavya, Xu, Kaidi
Large Language Models (LLMs) show promising results in language generation and instruction following but frequently "hallucinate", making their outputs less reliable. Although Uncertainty Quantification (UQ) offers potential solutions, implementing it accu…
External link:
http://arxiv.org/abs/2307.01379
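The entry above is about quantifying the uncertainty of LLM generations. One common, generic proxy (a hedged sketch, not necessarily the paper's formulation) is the length-normalized negative log-likelihood of the sampled tokens, e.g. with Hugging Face `transformers`:

```python
# Hedged sketch: length-normalized token log-likelihood as an uncertainty proxy.
# This is a generic baseline, not necessarily the formulation used in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # small model, just for illustration
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=8, do_sample=True,
                     return_dict_in_generate=True, output_scores=True)

# Per-token log-probabilities of the generated continuation.
gen_tokens = out.sequences[0, ids.shape[1]:]
logprobs = torch.stack([
    torch.log_softmax(score, dim=-1)[0, tok_id]
    for score, tok_id in zip(out.scores, gen_tokens)
])
uncertainty = -logprobs.mean()  # higher = less confident generation
print(uncertainty.item())
```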
Author:
Zhao, Zhengyue, Duan, Jinhao, Hu, Xing, Xu, Kaidi, Wang, Chenan, Zhang, Rui, Du, Zidong, Guo, Qi, Chen, Yunji
Diffusion models have demonstrated remarkable performance in image generation tasks, paving the way for powerful AIGC applications. However, these widely used generative models can also raise security and privacy concerns, such as copyright infringem…
External link:
http://arxiv.org/abs/2306.01902
To tackle the susceptibility of deep neural networks to adversarial examples, adversarial training has been proposed, which provides a notion of robustness through an inner maximization problem, presenting a first-order adversary, embedded within the outer minimization of the training loss…
External link:
http://arxiv.org/abs/2104.10586
Published in:
In Neurocomputing 13 November 2021 464:265-272
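The entry above summarizes adversarial training as an inner maximization (a first-order adversary within a norm ball) nested inside the outer minimization of the training loss, i.e. minimize over the model parameters the expected worst-case loss over allowed perturbations. A hedged, standard PGD-style sketch of that min-max loop (architecture and hyperparameters are illustrative, not the cited paper's method):

```python
# Hedged sketch of standard PGD adversarial training (Madry-style), not the
# specific method of the cited paper; hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: first-order adversary within an L-infinity ball."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: train on the worst-case perturbed examples."""
    delta = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```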
Author:
Duan, Jinhao, Cheng, Hao, Wang, Shiqi, Wang, Chenan, Zavalny, Alex, Xu, Renjing, Kailkhura, Bhavya, Xu, Kaidi
Although Large Language Models (LLMs) have shown great potential in Natural Language Generation, it is still challenging to characterize the uncertainty of model generations, i.e., when users can trust model outputs. Our research is derived from th…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e9189889e1821c0e9d593d3b7dfd3cbb
http://arxiv.org/abs/2307.01379
Academic article
This result cannot be displayed to users who are not logged in.
To view this result, please log in.