Showing 1 - 5 of 5 results for search: '"Kenneth T. Co"'
Published in:
Artificial Neural Networks and Machine Learning-ICANN 2022
Lecture Notes in Computer Science ISBN: 9783031159336
Deep neural networks have become an integral part of our software infrastructure and are being deployed in many widely used and safety-critical applications. However, their integration into many systems also brings with it the vulnerability to test t…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::069f3f2a91318623018a03d24d0faf1a
http://hdl.handle.net/10044/1/99620
Published in:
Lecture Notes in Computer Science ISBN: 9783030863791
ICANN (4)
Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data. They are a class of attacks that represents a significant threat as they facilitate realistic, practical, and low-cost attacks on…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::9bc98f4f7b321f99543020f9387f3099
https://doi.org/10.1007/978-3-030-86380-7_17
Published in:
NDSS 2021 Workshop
LiDARs play a critical role in Autonomous Vehicles' (AVs) perception and their safe operation. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work we demonstrate how the same physical…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2c26166ce1131278ecf7f345581ac693
Published in:
26th ACM Conference on Computer and Communications Security
CCS
Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples---perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbati…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::623f1805334de829974a98fcac2124d6
http://hdl.handle.net/10044/1/71700
Published in:
IEEE International Conference on Image Processing (ICIP)
Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise. In this paper we analyze the adversarial robustness of texture and shape-biased models to Universal Adversarial Perturbations (UAPs).
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::0f56824b01ac418d469ea13216b07423