Showing 1 - 10 of 28
for search: '"Leclerc, Guillaume"'
Author:
Leclerc, Guillaume
Deep learning computer vision systems, integral to technologies such as self-driving cars, facial recognition, and content moderation, require robustness against diverse perturbations to ensure reliability and safety. Examples of such perturbations include…
Author:
Khaddaj, Alaa, Leclerc, Guillaume, Makelov, Aleksandar, Georgiev, Kristian, Salman, Hadi, Ilyas, Andrew, Madry, Aleksander
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation. Defending against such attacks typically involves viewing these inserted examples as outliers…
External link:
http://arxiv.org/abs/2307.10163
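For context, the canonical trigger-patch backdoor (in the style of BadNets; not necessarily the construction studied in this paper) is easy to sketch in PyTorch. The patch size, location, poisoning rate, and target class below are illustrative assumptions:

import torch

def add_backdoor(images, labels, poison_frac=0.01, target_class=0):
    # Stamp a small white patch onto a random subset of images (NCHW,
    # values in [0, 1]) and relabel those images to the target class.
    images, labels = images.clone(), labels.clone()
    n_poison = max(1, int(poison_frac * images.shape[0]))
    idx = torch.randperm(images.shape[0])[:n_poison]
    images[idx, :, -4:, -4:] = 1.0  # 4x4 trigger in the bottom-right corner
    labels[idx] = target_class      # the adversary's desired prediction
    return images, labels

A model trained on the poisoned set behaves normally on clean inputs but predicts target_class whenever the trigger appears, which is why defenses often try to flag the poisoned subset as outliers before training.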
Author:
Leclerc, Guillaume, Ilyas, Andrew, Engstrom, Logan, Park, Sung Min, Salman, Hadi, Madry, Aleksander
We present FFCV, a library for easy and fast machine learning model training. FFCV speeds up model training by eliminating (often subtle) data bottlenecks from the training process. In particular, we combine techniques such as an efficient file storage format…
External link:
http://arxiv.org/abs/2306.12517
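As an illustration of the intended workflow, the sketch below follows the usage pattern in FFCV's public documentation: convert an indexed dataset to FFCV's file format once, then train from it with Loader, a drop-in replacement for the PyTorch data loader. The path, batch size, and field options are illustrative, and my_dataset stands in for any torch.utils.data.Dataset of (image, label) pairs:

from ffcv.writer import DatasetWriter
from ffcv.fields import RGBImageField, IntField
from ffcv.fields.decoders import IntDecoder, SimpleRGBImageDecoder
from ffcv.loader import Loader, OrderOption
from ffcv.transforms import ToTensor

# One-time conversion: serialize the dataset into FFCV's .beton format.
writer = DatasetWriter('train.beton', {
    'image': RGBImageField(max_resolution=256),
    'label': IntField(),
})
writer.from_indexed_dataset(my_dataset)

# Training time: Loader replaces torch.utils.data.DataLoader and runs a
# per-field decoding/augmentation pipeline.
loader = Loader('train.beton', batch_size=512, num_workers=8,
                order=OrderOption.RANDOM,
                pipelines={'image': [SimpleRGBImageDecoder(), ToTensor()],
                           'label': [IntDecoder(), ToTensor()]})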
The goal of data attribution is to trace model predictions back to training data. Despite a long line of work towards this goal, existing approaches to data attribution tend to force users to choose between computational tractability and efficacy…
External link:
http://arxiv.org/abs/2303.14186
We present an approach to mitigating the risks of malicious image editing posed by large diffusion models. The key idea is to immunize images so as to make them resistant to manipulation by these models. This immunization relies on injection of imperceptible perturbations…
External link:
http://arxiv.org/abs/2302.06588
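The immunization idea can be sketched, under assumptions, as projected gradient descent on the image itself: find an imperceptible (L-infinity-bounded) perturbation that pushes the editing model's internal representation far from that of the original image. Here encoder is a hypothetical stand-in for whichever differentiable component of the diffusion model is targeted, and eps, step, and iters are illustrative:

import torch

def immunize(image, encoder, eps=8/255, step=1/255, iters=40):
    # PGD-style sketch: maximize the distance between the embedding of the
    # perturbed image and the embedding of the original.
    target = encoder(image).detach()
    delta = torch.zeros_like(image)
    for _ in range(iters):
        delta.requires_grad_(True)
        loss = (encoder(image + delta) - target).norm()
        grad, = torch.autograd.grad(loss, delta)  # gradient w.r.t. delta only
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach()
    return (image + delta).clamp(0, 1)

An editing pipeline that depends on that representation then produces degraded or unrelated edits when run on the immunized image.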
Author:
Guo, Chong, Lee, Michael J., Leclerc, Guillaume, Dapello, Joel, Rao, Yug, Madry, Aleksander, DiCarlo, James J.
Visual systems of primates are the gold standard of robust perception. There is thus a general belief that mimicking the neural representations that underlie those systems will yield artificial visual systems that are adversarially robust. In this work…
External link:
http://arxiv.org/abs/2206.11228
We present a conceptual framework, datamodeling, for analyzing the behavior of a model class in terms of the training data. For any fixed "target" example $x$, training set $S$, and learning algorithm, a datamodel is a parameterized function $2^S \to \mathbb{R}$…
External link:
http://arxiv.org/abs/2202.00622
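Concretely, the framework instantiates datamodels as linear functions of the training-subset indicator. As a sketch of the definition: for a subset $S' \subseteq S$ with indicator vector $\mathbb{1}_{S'} \in \{0,1\}^{|S|}$, a linear datamodel predicts

$g_\theta(S') = \theta^\top \mathbb{1}_{S'} \approx f(x; S'),$

where $f(x; S')$ is an output of interest (e.g., the correct-class margin on $x$) of a model trained on $S'$, and $\theta$ is estimated by regressing observed outputs on inclusion vectors across many independently sampled subsets.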
Author:
Leclerc, Guillaume, Salman, Hadi, Ilyas, Andrew, Vemprala, Sai, Engstrom, Logan, Vineet, Vibhav, Xiao, Kai, Zhang, Pengchuan, Santurkar, Shibani, Yang, Greg, Kapoor, Ashish, Madry, Aleksander
We introduce 3DB: an extendable, unified framework for testing and debugging vision models using photorealistic simulation. We demonstrate, through a wide range of use cases, that 3DB allows users to discover vulnerabilities in computer vision systems…
External link:
http://arxiv.org/abs/2106.03805
Author:
Saligrama, Aditya, Leclerc, Guillaume
A necessary characteristic for the deployment of deep learning models in real-world applications is resistance to small adversarial perturbations while maintaining accuracy on non-malicious inputs. While robust training provides models that exhibit better…
External link:
http://arxiv.org/abs/2002.11572
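For reference, the "robust training" referred to here is adversarial training in the style of Madry et al.: each clean batch is replaced by worst-case perturbed examples found with projected gradient descent before the usual parameter update. A minimal sketch, with illustrative eps, step size, and iteration count:

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
    # Search for an L-infinity-bounded perturbation that maximizes the loss.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(iters):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)  # no gradients accumulate on weights
        delta = (delta + step * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).detach()

# In the training loop, optimize on the adversarial batch instead of the clean one:
#   x_adv = pgd_attack(model, x, y)
#   F.cross_entropy(model(x_adv), y).backward(); optimizer.step()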
Author:
Leclerc, Guillaume, Madry, Aleksander
The learning rate schedule has a major impact on the performance of deep learning models. Still, the choice of a schedule is often heuristic. We aim to develop a precise understanding of the effects of different learning rate schedules and the appropriate…
External link:
http://arxiv.org/abs/2002.10376
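The schedules in question are the standard heuristics exposed by, e.g., torch.optim.lr_scheduler; the sketch below shows the common step-decay choice (all hyperparameters are illustrative):

import torch

model = torch.nn.Linear(10, 2)  # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Step decay: divide the learning rate by 10 every 30 epochs.
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=30, gamma=0.1)
# Another common heuristic: torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=90)

for epoch in range(90):
    ...  # one epoch of training with opt
    sched.step()  # advance the schedule once per epoch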