Showing 1 - 10 of 24 for search: '"Co, Kenneth T."'
Deep neural networks have become an integral part of our software infrastructure and are being deployed in many widely-used and safety-critical applications. However, their integration into many systems also brings with it the vulnerability to test …
External link:
http://arxiv.org/abs/2204.08726
Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit systemic vulnerabilities and enable physically realizable and robust attacks against Deep Neural Networks (DNNs). UAPs generalize across many …
External link:
http://arxiv.org/abs/2105.07334
Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data. They are a class of attacks that represents a significant threat, as they facilitate realistic, practical, and low-cost attacks on …
External link:
http://arxiv.org/abs/2104.10459
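The two abstracts above characterize UAPs as single, input-agnostic perturbations that fool a model across large portions of a dataset. The sketch below shows one common recipe for crafting such a perturbation, iterative signed-gradient ascent constrained to an L-infinity ball; the model, data shapes, and hyperparameters are illustrative assumptions rather than the papers' exact algorithm.

```python
# Minimal UAP-crafting sketch: optimize ONE shared perturbation (delta) to
# raise the loss over an entire dataset, keeping it inside an L-inf budget.
# Model, data, and hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn

def craft_uap(model, loader, eps=0.04, step=0.005, epochs=5):
    model.eval()
    delta = torch.zeros(1, 3, 32, 32, requires_grad=True)  # one perturbation for all inputs
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            loss = loss_fn(model(x + delta), y)
            loss.backward()
            with torch.no_grad():
                delta += step * delta.grad.sign()  # signed-gradient ascent step
                delta.clamp_(-eps, eps)            # project back into the L-inf ball
            delta.grad.zero_()
    return delta.detach()

# Toy usage with random data, purely to make the sketch runnable:
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
data = [(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))) for _ in range(4)]
uap = craft_uap(model, data)
```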
LiDARs play a critical role in Autonomous Vehicles' (AVs) perception and their safe operation. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work we demonstrate how the same physical …
External link:
http://arxiv.org/abs/2102.03722
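The LiDAR abstract above concerns spoofed return signals that elicit fake objects. At the data level, the effect can be pictured as injecting an extra cluster of points into the captured point cloud, as in this purely illustrative sketch (coordinates and sizes are made up):

```python
# Illustrative only: append a spoofed cluster to a LiDAR point cloud so that
# a downstream detector may perceive an object that is not physically there.
import numpy as np

cloud = np.random.uniform(-50, 50, size=(10000, 3))  # genuine (x, y, z) returns
# A tight, car-sized cluster of fake returns roughly 8 m ahead of the sensor:
ghost = np.array([8.0, 0.0, 0.5]) + 0.4 * np.random.randn(60, 3)
spoofed = np.vstack([cloud, ghost])
```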
Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically …
External link:
http://arxiv.org/abs/2012.06024
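The abstract above names two compression methods, pruning and quantization. The sketch below applies off-the-shelf PyTorch versions of both to a toy network; it illustrates the mechanisms only and does not reproduce the paper's pipeline:

```python
# Magnitude pruning + post-training dynamic quantization on a toy model.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# Pruning: zero the 50% smallest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Quantization: store Linear weights as int8, dequantized on the fly at inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```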
Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise. In this paper we analyze the adversarial robustness of texture and shape-biased models to Universal Adversarial Perturbations (UAPs).
External link:
http://arxiv.org/abs/1911.10364
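Comparing texture- and shape-biased models against a UAP typically reduces to one number per model: the fraction of inputs whose prediction flips once the shared perturbation is added. A minimal sketch of such a metric (names are illustrative, not the paper's code):

```python
# Fraction of inputs whose predicted class changes under a shared perturbation.
import torch

def evasion_rate(model, loader, uap):
    changed, total = 0, 0
    with torch.no_grad():
        for x, _ in loader:
            clean = model(x).argmax(dim=1)
            perturbed = model(x + uap).argmax(dim=1)
            changed += (clean != perturbed).sum().item()
            total += x.size(0)
    return changed / total
```

Running this for a texture-biased and a shape-biased model on the same loader and UAP yields the side-by-side robustness comparison the abstract describes.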
Federated learning enables training collaborative machine learning models at scale with many participants whilst preserving the privacy of their datasets. Standard federated learning techniques are vulnerable to Byzantine failures, biased local datasets, …
External link:
http://arxiv.org/abs/1909.05125
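The federated learning abstract above mentions Byzantine failures. The contrast it alludes to can be shown in a few lines: plain federated averaging is dragged arbitrarily far by a single malicious update, while a coordinate-wise median (a standard Byzantine-robust aggregation rule, not necessarily the paper's own defense) is not:

```python
# Plain averaging vs. coordinate-wise median under one Byzantine client.
import torch

def fed_avg(updates):
    # updates: list of per-client parameter tensors of identical shape
    return torch.stack(updates).mean(dim=0)

def fed_median(updates):
    # the per-coordinate median tolerates a minority of arbitrarily bad clients
    return torch.stack(updates).median(dim=0).values

honest = [torch.randn(4) for _ in range(9)]       # well-behaved client updates
byzantine = [torch.full((4,), 1e6)]               # one arbitrarily bad update
print(fed_avg(honest + byzantine))                # ruined by the outlier
print(fed_median(honest + byzantine))             # stays near the honest updates
```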
Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this …
External link:
http://arxiv.org/abs/1906.03455
Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations …
External link:
http://arxiv.org/abs/1810.00470
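The abstract's first sentence describes per-input adversarial examples, the input-specific counterpart to the input-agnostic perturbations it goes on to discuss. Below is a minimal one-step sketch in the style of the fast gradient sign method (FGSM); the model and shapes are hypothetical:

```python
# One-step, per-input adversarial example (FGSM-style).
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    x = x.clone().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()  # single signed-gradient step

# Toy usage with a random model and inputs:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x_adv = fgsm(model, torch.randn(2, 1, 28, 28), torch.tensor([3, 7]))
```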
Published in:
Plastics News. 4/20/1998, Vol. 10 Issue 8, p22. 6p. 1 Chart.