Showing 1 - 10 of 194 for the search: '"adversarial patch"'
Published in:
Visual Intelligence, Vol 2, Iss 1, Pp 1-10 (2024)
Abstract Visual language pre-training (VLP) models have demonstrated significant success in various domains, but they remain vulnerable to adversarial attacks. Addressing these adversarial vulnerabilities is crucial for enhancing security in multi-modal …
External link:
https://doaj.org/article/c87c92f554ec49888574afaf3ebf4399
Published in:
International Journal of Computational Intelligence Systems, Vol 17, Iss 1, Pp 1-16 (2024)
Abstract Adversarial patches, a type of adversarial example, pose serious security threats to deep neural networks (DNNs) by inducing erroneous outputs. Existing gradient stabilization methods aim to stabilize the optimization direction of adversarial …
External link:
https://doaj.org/article/19206d3a309a44ea8e8093627c459d08
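None of the code below comes from the article above; it is a minimal illustrative sketch, assuming PyTorch, of the general idea of momentum-based gradient stabilization (in the style of MI-FGSM) applied to adversarial patch optimization. The model, the fixed top-left placement, and all hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def optimize_patch(model, images, labels, patch_size=32,
                   steps=200, lr=0.05, momentum=0.9):
    """Sketch: optimize a square adversarial patch whose update
    direction is stabilized by momentum accumulation."""
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    velocity = torch.zeros_like(patch)  # accumulated (momentum) gradient
    for _ in range(steps):
        patched = images.clone()
        # Fixed top-left placement keeps the sketch short; real attacks
        # typically randomize location and apply transformations.
        patched[:, :, :patch_size, :patch_size] = patch
        loss = F.cross_entropy(model(patched), labels)
        grad, = torch.autograd.grad(loss, patch)
        # L1-normalized momentum accumulation smooths the update
        # direction across steps: the core of gradient stabilization.
        velocity = momentum * velocity + grad / grad.abs().sum().clamp_min(1e-12)
        with torch.no_grad():
            patch += lr * velocity.sign()  # ascend the loss to cause errors
            patch.clamp_(0, 1)
    return patch.detach()
```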
Published in:
Journal of King Saud University: Computer and Information Sciences, Vol 36, Iss 6, Pp 102122- (2024)
Researching infrared adversarial attacks is crucial for ensuring the safe deployment of security-sensitive systems reliant on infrared object detectors. However, current research on infrared adversarial attacks mainly focuses on pedestrian detection …
External link:
https://doaj.org/article/6c2a1a4efc9742ebb89c1726a81450e6
Published in:
网络与信息安全学报, Vol 10, Pp 169-180 (2024)
The application of deep neural networks to object detection has been widely adopted in various fields. However, the introduction of adversarial patch attacks, which add local perturbations to images to mislead deep neural networks, poses a significant …
External link:
https://doaj.org/article/eb8e766733174ed289d27cef0b0ca9ac
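As a companion to the abstract above (not code from the article itself), here is a hedged sketch of what "adding local perturbations" means in practice: a small patch tensor is pasted over a region of each input image, here at a random location per image. The helper name and the (3, ph, pw) patch layout are assumptions.

```python
import torch

def apply_patch(images, patch, generator=None):
    """Paste a (3, ph, pw) patch at a random location in each image of a
    (B, 3, H, W) batch; the rest of each image is left untouched."""
    b, _, h, w = images.shape
    ph, pw = patch.shape[-2:]
    out = images.clone()
    for i in range(b):
        top = torch.randint(0, h - ph + 1, (1,), generator=generator).item()
        left = torch.randint(0, w - pw + 1, (1,), generator=generator).item()
        out[i, :, top:top + ph, left:left + pw] = patch
    return out
```

Gradients flow through the slice assignment, so the same helper can sit inside a patch-optimization loop like the one sketched earlier.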
Published in:
IEEE Access, Vol 12, Pp 126729-126737 (2024)
Recent studies have shown that machine learning models are vulnerable to adversarial attacks. Adversarial attacks are deliberate attempts to modify the input data of a machine learning model in a way that causes it to produce incorrect predictions. …
External link:
https://doaj.org/article/f71f28bc91c74525a11cd6a6be74cd3f
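The abstract's definition (deliberately modifying inputs to induce incorrect predictions) is commonly illustrated with the one-step Fast Gradient Sign Method (FGSM). The sketch below is a standard textbook version in PyTorch, not code from the article; the epsilon budget is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb each pixel by +/- eps in the direction
    that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```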
Published in:
IEEE Access, Vol 12, Pp 13571-13585 (2024)
Monocular depth estimation (MDE) is an important task in scene understanding, and its performance has improved significantly with the use of convolutional neural networks (CNNs). These models can now be deployed on edge devices …
External link:
https://doaj.org/article/bf05ccd4a5264303b2e65d9a323b7d78
Published in:
Sensors, Vol 24, Iss 19, p 6461 (2024)
Object detection systems are used in various fields such as autonomous vehicles and facial recognition. In particular, object detection using deep learning networks enables real-time processing on low-performance edge devices and can maintain high detection …
External link:
https://doaj.org/article/859ff30f8df646cf9ab112acb437c4ee
Published in:
网络与信息安全学报, Vol 9, Pp 16-27 (2023)
Recent studies have revealed that the deep neural networks (DNNs) used in artificial intelligence systems are highly vulnerable to attacks based on adversarial samples. To address this issue, a dual adversarial attack method was proposed for license plate recognition …
External link:
https://doaj.org/article/83831ba156484963aa3a4ef608fe06e2
Published in:
Tehnički Vjesnik, Vol 30, Iss 6, Pp 1888-1898 (2023)
Deep neural networks (DNNs) are susceptible to adversarial attacks, including the recently introduced locally visible adversarial patch attack, which achieves a success rate exceeding 96%. These attacks pose significant challenges to DNN security. …
External link:
https://doaj.org/article/a9fd199bafcb4b2d99cd12492df150a6
Academic article
This result cannot be displayed to users who are not signed in.
You must sign in to view this result.