Showing 1 - 10 of 149 for search: '"white-box attack"'
Published in:
工程科学学报, Vol 46, Iss 9, Pp 1630-1637 (2024)
Deep neural network-based video classification models enjoy widespread use because of their superior performance on visual tasks. However, with this broad application comes a deep-rooted concern about their security. Recent research signals … (a minimal white-box attack sketch follows this entry.)
External link:
https://doaj.org/article/870860a303d249c29d70568f78b5c89c
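
The entry above concerns white-box adversarial attacks on a DNN classifier (here, a video classification model). The listed paper's own method is not reproduced here; purely as a point of reference for the search term, below is a minimal sketch of the classic one-step FGSM white-box attack (Goodfellow et al., 2015) against a generic PyTorch classifier. The model, input batch, labels, and epsilon budget are placeholders, not values from the paper.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step white-box FGSM: perturb x along the sign of the loss gradient.
    model, x (inputs in [0, 1]) and y (true labels) are placeholders for
    whatever classifier and data are under attack."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # White-box access: the attacker reads the exact input gradient.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

The "white-box" assumption is what makes the gradient readable at all; black-box attacks would have to estimate it from queries instead.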
Published in:
Dianxin kexue, Vol 40, Pp 64-74 (2024)
Many methods currently exist for generating adversarial samples for synthetic aperture radar (SAR) images, but problems such as large perturbation magnitudes, unstable training, and unguaranteed quality of the adversarial sample … (a sketch of a standard perturbation-budget projection follows this entry.)
External link:
https://doaj.org/article/845e0c2591314a35a20a65e2df89213a
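
This entry points to a common complaint about adversarial samples for SAR imagery: the perturbation can grow large and sample quality is not guaranteed. A standard way to bound perturbation size, independent of that paper's own method, is to project each adversarial image back into an L-infinity ball of radius eps around the clean image. The sketch below assumes a PyTorch tensor pipeline and is purely illustrative.

import torch

def project_linf(x_adv, x_clean, eps=4 / 255):
    """Clip an adversarial image back into the eps-ball around the clean
    image (and into the valid pixel range), bounding total perturbation."""
    delta = torch.clamp(x_adv - x_clean, min=-eps, max=eps)
    return torch.clamp(x_clean + delta, 0.0, 1.0)

Calling this after every optimization step keeps the attack's distortion budget explicit, which is one generic answer to the "large amount of perturbation" problem the abstract raises.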
Published in:
AI Open, Vol 5, Pp 126-141 (2024)
Deep reinforcement learning (DRL) has been shown to have numerous potential applications in the real world. However, DRL algorithms are still extremely sensitive to noise and adversarial perturbations, which inhibits the deployment of RL in many re… (an illustrative observation-perturbation sketch follows this entry.)
External link:
https://doaj.org/article/a5b9a8b0e2644b76aecd9d6a6148bb08
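
The AI Open entry notes that DRL policies are extremely sensitive to adversarial perturbations of their inputs. One common way to probe this, not necessarily the survey's own formulation, is to apply an FGSM-style perturbation to the agent's observation so that the action the clean policy would take becomes less likely. The sketch below assumes a PyTorch policy network that outputs action logits; policy, obs, and eps are placeholders.

import torch
import torch.nn.functional as F

def perturb_observation(policy, obs, eps=0.01):
    """White-box attack on a DRL policy: nudge the observation so as to
    reduce the probability of the action the clean policy prefers."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    a = logits.argmax(dim=-1)           # action chosen on the clean observation
    loss = F.cross_entropy(logits, a)   # raising this loss lowers p(a)
    loss.backward()
    return (obs + eps * obs.grad.sign()).detach()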
Published in:
Visual Computing for Industry, Biomedicine, and Art, Vol 6, Iss 1, Pp 1-11 (2023)
Deep neural networks are vulnerable to attacks from adversarial inputs. Corresponding attack research on human pose estimation (HPE), particularly body joint detection, remains largely unexplored. Transferring classification-based attac…
External link:
https://doaj.org/article/81f4f10b1c8c476da8539bba0a7dfb6a
Published in:
Entropy, Vol 26, Iss 11, p 903 (2024)
Adversarial attacks that mislead deep neural networks (DNNs) into making incorrect predictions can also be implemented in the physical world. However, most existing adversarial camouflage textures that attack object detection models only consi…
External link:
https://doaj.org/article/a33bf8f092c84dfbab9585d593211280
Published in:
In AI Open 2024 5:126-141
Published in:
In Journal of Information Security and Applications September 2023 77
Published in:
Jisuanji kexue, Vol 50, Iss 4, Pp 88-95 (2023)
Although deep neural networks (DNNs) perform well in most classification tasks, they are vulnerable to adversarial examples, which calls the security of DNNs into question. Research on generating strongly aggressive adversarial examples can help … (an iterative PGD sketch follows this entry.)
External link:
https://doaj.org/article/3a4cbfbd17684f8d8288ed9c696c239c
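
The Jisuanji kexue entry discusses generating strongly aggressive adversarial examples. Its specific design is not reproduced here; the usual baseline for a stronger white-box attack than one-step FGSM is iterated projected gradient descent (PGD, Madry et al., 2018), sketched below under the same placeholder model and data assumptions as the earlier snippets.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative white-box PGD: repeat small gradient-sign steps and project
    back into the eps-ball around the clean input after each step."""
    x = x.clone().detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # L-infinity projection
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

With more steps and a smaller step size, PGD typically finds stronger adversarial examples than FGSM under the same perturbation budget, which is why it is the common reference point for "aggressive" attacks.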
Published in:
In Expert Systems With Applications 15 March 2023 214
Published in:
In Computer Vision and Image Understanding January 2023 227