Author:
Pavate, Aruna; Bansode, Rajesh
Source:
International Journal of Ambient Computing and Intelligence; May 2022, Vol. 13, Issue 1, p1-18, 18p
Abstract:
Deep learning is a subfield of machine learning that has achieved prominent results in almost all application domains. However, deep neural networks have been found to be susceptible to perturbed inputs that cause a model to produce an output other than the expected one: adding an insignificant perturbation to the input can lead computer vision models to make erroneous predictions. It remains an open question whether humans are prone to comparable errors. In this paper, we address this issue by surveying the latest practices for generating adversarial examples in computer vision applications, considering diverse known parameters, unknown parameters, and architectures. The distinct techniques are analyzed with respect to a set of common parameters. Adversarial examples are easily transferable when designing computer vision applications, which undermines the reliability of label classification. The findings highlight that some methods, such as ZOO and DeepFool, achieved 100% success for non-targeted attacks but are application-specific.
Database:
Supplemental Index
External link:
|