Showing 1 - 10 of 315 results for search: '"physical adversarial attacks"'
As Face Recognition (FR) technology becomes increasingly prevalent in finance, the military, public safety, and everyday life, security concerns have grown substantially. Physical adversarial attacks targeting FR systems in real-world settings have a…
External link:
http://arxiv.org/abs/2410.16317
Author:
Guesmi, Amira, Shafique, Muhammad
Autonomous vehicles (AVs) rely heavily on LiDAR (Light Detection and Ranging) systems for accurate perception and navigation, providing high-resolution 3D environmental data that is crucial for object detection and classification. However, LiDAR syst…
External link:
http://arxiv.org/abs/2409.20426
Deep neural networks exhibit excellent performance in computer vision tasks, but their vulnerability to real-world adversarial attacks, achieved through physical objects that can corrupt their predictions, raises serious security concerns for their a…
External link:
http://arxiv.org/abs/2311.11191
In this paper, we present a comprehensive survey of the current trends focusing specifically on physical adversarial attacks. We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristi…
External link:
http://arxiv.org/abs/2308.06173
Modern automated surveillance techniques are heavily reliant on deep learning methods. Despite the superior performance, these learning systems are inherently vulnerable to adversarial attacks - maliciously crafted inputs that are designed to mislead…
External link:
http://arxiv.org/abs/2305.01074
Adversarial attacks can mislead deep learning models into making false predictions by implanting small perturbations in the original input that are imperceptible to the human eye, which poses a huge security threat to the computer vision systems based on…
External link:
http://arxiv.org/abs/2303.12249
Academic article
This result cannot be displayed to users who are not signed in; sign in to view it.
Author:
Woitschek, Fabian, Schneider, Georg
Published in:
2021 IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan, 2021, pp. 481-487
Deep Neural Networks (DNNs) are increasingly applied in the real world in safety-critical applications like advanced driver assistance systems. An example of such a use case is represented by traffic sign recognition systems. At the same time, it is k…
External link:
http://arxiv.org/abs/2302.13570
Academic article
This result cannot be displayed to users who are not signed in; sign in to view it.
Published in:
IEEE Access, Vol 11, Pp 109617-109668 (2023)
Deep Neural Networks (DNNs) have shown impressive performance in computer vision tasks; however, their vulnerability to adversarial attacks raises concerns regarding their security and reliability. Extensive research has shown that DNNs can be compro…
External link:
https://doaj.org/article/3b15879245f2447b828b1865139c5856