Showing 1 - 4 of 4 for search: '"Ji, Minzhi"'
Deep neural networks (DNNs) are under threat from adversarial example attacks. The adversary can easily change the outputs of DNNs by adding small, well-designed perturbations to inputs. Adversarial example detection is fundamental work for robust DNNs.
External link:
http://arxiv.org/abs/2107.09502
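The attack described in this abstract, adding a small well-designed perturbation to an input, can be illustrated with a minimal sketch. Below is a hedged example of the fast gradient sign method (FGSM), a standard way of crafting such perturbations; it is not taken from the paper, and the model, loss, and epsilon budget are assumptions for illustration only.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x plus a small perturbation, epsilon * sign(grad_x loss),
    that pushes the model's output away from the true label y.
    (Illustrative sketch; the paper's own attack setup may differ.)"""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range

Detection methods such as the one this paper studies aim to flag inputs like x_adv before they reach the classifier.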
Adversarial examples are well-designed input samples whose perturbations are imperceptible to the human eye but easily mislead the output of deep neural networks (DNNs). Existing works synthesize adversarial examples by leveraging simple metrics.
External link:
http://arxiv.org/abs/2010.06855
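The "simple metrics" mentioned in this abstract are typically Lp norms that bound how visible a perturbation is. As a hedged illustration (the 8/255 budget and array shapes are assumptions for the sketch, not values from the paper), imperceptibility is often checked like this:

import numpy as np

def linf_distance(x, x_adv):
    """L-infinity norm: the largest per-pixel change between
    the clean input x and the adversarial input x_adv."""
    return np.max(np.abs(x_adv - x))

# A perturbation is commonly deemed "imperceptible" when its
# L-infinity distance stays under a small budget such as 8/255.
x = np.random.rand(32, 32, 3)
x_adv = x + np.random.uniform(-8/255, 8/255, x.shape)
assert linf_distance(x, x_adv) <= 8/255 + 1e-9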
Published in: Information Sciences, October 2022, 613:717-730
Published in: Applied Soft Computing Journal, July 2022, 124