Showing 1 - 10 of 32 for search: '"Rauber, Jonas"'
EagerPy is a Python framework that lets you write code that automatically works natively with PyTorch, TensorFlow, JAX, and NumPy. Library developers no longer need to choose between supporting just one of these frameworks or reimplementing the library …
External link:
http://arxiv.org/abs/2008.04175
Author:
Rauber, Jonas, Bethge, Matthias
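As a minimal sketch of the pattern the abstract describes, a single function can serve all four frameworks by converting at its boundaries; this follows EagerPy's documented astensor_/restore idiom, and the function name l2_norm is illustrative rather than taken from the paper:

```python
import eagerpy as ep

def l2_norm(x):
    # Wrap a native PyTorch, TensorFlow, JAX, or NumPy tensor in an
    # EagerPy tensor; restore_type converts results back to that framework.
    x, restore_type = ep.astensor_(x)
    # Framework-agnostic math on the EagerPy tensor.
    result = x.square().sum().sqrt()
    # Hand back a native tensor of the same type the caller passed in.
    return restore_type(result)
```

Called with a NumPy array this returns a NumPy value; called with a torch.Tensor it returns a torch.Tensor, with no framework-specific branches inside the function.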
Rescaling a vector $\vec{\delta} \in \mathbb{R}^n$ to a desired length is a common operation in many areas such as data science and machine learning. When the rescaled perturbation $\eta \vec{\delta}$ is added to a starting point $\vec{x} \in D$ (where $D$ is the data domain) …
External link:
http://arxiv.org/abs/2007.07677
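Restating the setup the abstract sketches (the display below is a reconstruction in the abstract's notation, not a quote): the naive choice $\eta = \epsilon / \|\vec{\delta}\|_2$ followed by element-wise clipping of $\vec{x} + \eta \vec{\delta}$ to $D$ only guarantees an effective perturbation of norm at most $\epsilon$, whereas a clipping-aware rescaling solves for the $\eta$ that gives the clipped result exactly the desired size:

$$\left\| \mathrm{clip}_D\!\big(\vec{x} + \eta\,\vec{\delta}\big) - \vec{x} \right\|_2 = \epsilon .$$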
Published in:
International Journal of Structural Integrity, 2022, Vol. 14, Issue 1, pp. 91-102.
External link:
http://www.emeraldinsight.com/doi/10.1108/IJSI-05-2022-0074
The ubiquity of smartphone usage in many people's lives makes it a rich source of information about a person's mental and cognitive state. In this work we analyze 12 weeks of phone usage data from 113 older adults, 31 with diagnosed cognitive impairment …
External link:
http://arxiv.org/abs/1911.05683
Throughout the past five years, the susceptibility of neural networks to minimal adversarial perturbations has moved from a peculiar phenomenon to a core issue in Deep Learning. Despite much attention, however, progress towards more robust models is …
External link:
http://arxiv.org/abs/1907.01003
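For context, the minimal adversarial perturbations referred to here are standardly formalized as the smallest input change that flips a classifier's decision (a textbook formulation, not a quote from this paper):

$$\vec{\delta}^{*} = \arg\min_{\vec{\delta}} \|\vec{\delta}\|_p \quad \text{s.t.} \quad \arg\max_k f_k(\vec{x} + \vec{\delta}) \neq y ,$$

where $f_k$ are the model's class scores, $y$ is the correct label, and $p$ (typically $0$, $2$, or $\infty$) fixes the threat model; $\|\vec{\delta}^{*}\|_p$ then measures the model's robustness at $\vec{x}$.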
Modern neural networks are highly non-robust against adversarial manipulation. A significant amount of work has been invested in techniques to compute lower bounds on robustness through formal guarantees and to build provably robust models. However, …
External link:
http://arxiv.org/abs/1903.11359
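The lower bounds via formal guarantees mentioned here are typically certified radii (again a standard definition rather than one taken from this paper): a certificate $r$ at input $\vec{x}$ guarantees

$$\arg\max_k f_k(\vec{x} + \vec{\delta}) = \arg\max_k f_k(\vec{x}) \quad \text{for all } \|\vec{\delta}\|_p \le r ,$$

so $r$ lower-bounds the distance to the nearest adversarial example, while empirical attacks provide the matching upper bounds.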
Author:
Carlini, Nicholas, Athalye, Anish, Papernot, Nicolas, Brendel, Wieland, Rauber, Jonas, Tsipras, Dimitris, Goodfellow, Ian, Madry, Aleksander, Kurakin, Alexey
Correctly evaluating defenses against adversarial examples has proven to be extremely difficult. Despite the significant amount of recent work attempting to design defenses that withstand adaptive attacks, few have succeeded; most papers that propose …
External link:
http://arxiv.org/abs/1902.06705
Author:
Geirhos, Robert, Temme, Carlos R. Medina, Rauber, Jonas, Schütt, Heiko H., Bethge, Matthias, Wichmann, Felix A.
We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations. First, using three well known DNNs (ResNet-152, VGG-19, GoogLeNet) we find the human visual system …
External link:
http://arxiv.org/abs/1808.08750
Author:
Brendel, Wieland, Rauber, Jonas, Kurakin, Alexey, Papernot, Nicolas, Veliqi, Behar, Salathé, Marcel, Mohanty, Sharada P., Bethge, Matthias
The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. This document is an updated version of our competition proposal that …
External link:
http://arxiv.org/abs/1808.01976
Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and …
External link:
http://arxiv.org/abs/1805.09190