Showing 1 - 3 of 3 for search: '"Liang, Kaisheng"'
Deep neural networks exhibit vulnerability to adversarial examples that can transfer across different models. A particularly challenging problem is developing transferable targeted attacks that can mislead models into predicting specific target class…
External link:
http://arxiv.org/abs/2411.15553
Unrestricted adversarial attacks present a serious threat to deep learning models and adversarial defense techniques. They pose severe security problems for deep learning applications because they can effectively bypass defense mechanisms. However, p…
External link:
http://arxiv.org/abs/2307.12499
Author:
Liang, Kaisheng; Xiao, Bin
Adversarial attacks can mislead deep neural networks (DNNs) by adding imperceptible perturbations to benign examples. Attack transferability enables adversarial examples to attack black-box DNNs with unknown architectures or parameters, which pos…
External link:
http://arxiv.org/abs/2304.11579
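The abstracts above all build on the same basic idea: nudging an input by a small, sign-of-gradient step that flips a model's prediction. A minimal sketch of that idea (FGSM-style, on a hypothetical toy linear classifier with a hand-written gradient, not the method of any listed paper):

```python
import numpy as np

# Hypothetical toy setup: a fixed one-layer logistic "classifier" whose
# input gradient we can compute by hand (no autograd needed).
rng = np.random.default_rng(0)
w = rng.normal(size=16)          # fixed toy weights
x = rng.normal(size=16)          # a benign example

def predict(x):
    """Sigmoid confidence of the toy linear model for class y = 1."""
    return 1.0 / (1.0 + np.exp(-w @ x))

# For cross-entropy loss with label y, the gradient of the loss
# w.r.t. the input of this linear model is (p - y) * w.
y = 1.0
grad_x = (predict(x) - y) * w

# FGSM-style step: move each input coordinate by epsilon in the
# direction of the gradient's sign, so the perturbation stays small
# (bounded by epsilon in every coordinate) yet increases the loss.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

# The perturbed example lowers the model's confidence in the true label.
print(predict(x), predict(x_adv))
```

The sign step keeps the perturbation within an epsilon-ball in the max norm, which is why such perturbations can remain imperceptible while still degrading the prediction; the papers above study how far such perturbations transfer to other, black-box models.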