Showing 1 - 10 of 67 for search: '"Ilyas, Andrew"'
Author:
Jain, Saachi, Hamidieh, Kimia, Georgiev, Kristian, Ilyas, Andrew, Ghassemi, Marzyeh, Madry, Aleksander
Machine learning models can fail on subgroups that are underrepresented during training. While techniques such as dataset balancing can improve performance on underperforming groups, they require access to training group annotations and can end up removing…
External link:
http://arxiv.org/abs/2406.16846
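As context for the "dataset balancing" baseline mentioned above, here is a minimal sketch (assuming explicit group annotations are available, the very requirement the abstract points out) of group-balanced resampling with a weighted sampler; the toy data and weights are illustrative, not the paper's proposed method.

```python
# A minimal sketch of group-balanced resampling, assuming group annotations
# are available for every training example (the requirement noted above).
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 8)).astype(np.float32)   # toy features
y = rng.integers(0, 2, size=1000)                   # toy labels
group = (rng.random(1000) < 0.1).astype(np.int64)   # group 1 is underrepresented

# Weight each example inversely to its group's frequency so minibatches are,
# in expectation, balanced across groups.
group_counts = np.bincount(group, minlength=2)
weights = 1.0 / group_counts[group]

sampler = WeightedRandomSampler(
    weights=torch.as_tensor(weights, dtype=torch.double),
    num_samples=len(weights),
    replacement=True,
)
dataset = TensorDataset(torch.from_numpy(x), torch.from_numpy(y),
                        torch.from_numpy(group))
loader = DataLoader(dataset, batch_size=64, sampler=sampler)

xb, yb, gb = next(iter(loader))
print(gb.float().mean())   # roughly 0.5 despite the 10% base rate
```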
Most modern recommendation algorithms are data-driven: they generate personalized recommendations by observing users' past behaviors. A common assumption in recommendation is that how a user interacts with a piece of content (e.g., whether they choose…
External link:
http://arxiv.org/abs/2405.05596
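For concreteness, a hedged sketch of the data-driven setup the abstract describes: scoring items from a matrix of observed past interactions via a low-rank factorization. The toy interaction matrix and rank are assumptions for illustration; the paper's focus is on what such engagement signals actually capture.

```python
# A toy "data-driven" recommender: factorize observed user-item interactions
# and rank unseen items by predicted affinity.  Names and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 30, 4
interactions = (rng.random((n_users, n_items)) < 0.1).astype(float)  # 0/1 "clicks"

# Rank-k reconstruction of the interaction matrix via truncated SVD.
U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
scores = (U[:, :k] * s[:k]) @ Vt[:k]          # predicted affinity for every pair

# Recommend the top-5 unseen items for user 0.
user = 0
unseen = np.flatnonzero(interactions[user] == 0)
top = unseen[np.argsort(-scores[user, unseen])[:5]]
print(top)
```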
How does the internal computation of a machine learning model transform inputs into predictions? In this paper, we introduce a task called component modeling that aims to address this question. The goal of component modeling is to decompose an ML model…
External link:
http://arxiv.org/abs/2404.11534
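A rough illustration of the question component modeling asks, under the assumption that the "components" are the hidden units of a small MLP: ablate each unit and record how the prediction on a fixed input changes. This naive ablation pass is only meant to make the goal concrete; it is not the estimator introduced in the paper.

```python
# Hidden units of a small MLP stand in for "components"; each unit's
# contribution to one prediction is estimated by zeroing it out.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
x = torch.randn(1, 16)
target_class = 3

def target_logit(mask: torch.Tensor) -> float:
    """Logit of the target class with selected hidden units masked out."""
    h = torch.relu(model[0](x)) * mask        # ablate components where mask == 0
    return model[2](h)[0, target_class].item()

full = target_logit(torch.ones(32))
contributions = []
for j in range(32):
    mask = torch.ones(32)
    mask[j] = 0.0                             # remove component j only
    contributions.append(full - target_logit(mask))

# Components whose removal changes the prediction the most.
print(sorted(range(32), key=lambda j: -abs(contributions[j]))[:5])
```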
Many human-facing algorithms -- including those that power recommender systems or hiring decision tools -- are trained on data provided by their users. The developers of these algorithms commonly adopt the assumption that the data generating process…
External link:
http://arxiv.org/abs/2312.17666
Author:
Khaddaj, Alaa, Leclerc, Guillaume, Makelov, Aleksandar, Georgiev, Kristian, Salman, Hadi, Ilyas, Andrew, Madry, Aleksander
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation. Defending against such attacks typically involves viewing these inserted examples as outliers…
External link:
http://arxiv.org/abs/2307.10163
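A small sketch of the threat model in the first sentence (a BadNets-style trigger patch plus relabeling), with all constants chosen for illustration; the paper itself concerns rethinking how to defend against such insertions.

```python
# Illustrative backdoor poisoning: stamp a trigger patch onto a small
# fraction of training images and relabel them to an attacker-chosen class.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((500, 32, 32, 3)).astype(np.float32)   # toy "dataset"
labels = rng.integers(0, 10, size=500)

poison_fraction = 0.05
target_class = 7
n_poison = int(poison_fraction * len(images))
poison_idx = rng.choice(len(images), size=n_poison, replace=False)

# Trigger: a 3x3 white square in the bottom-right corner.
images[poison_idx, -3:, -3:, :] = 1.0
labels[poison_idx] = target_class

# A model trained on (images, labels) may now classify any input carrying
# the trigger as `target_class`, regardless of its actual content.
```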
Author:
Leclerc, Guillaume, Ilyas, Andrew, Engstrom, Logan, Park, Sung Min, Salman, Hadi, Madry, Aleksander
We present FFCV, a library for easy and fast machine learning model training. FFCV speeds up model training by eliminating (often subtle) data bottlenecks from the training process. In particular, we combine techniques such as an efficient file storage format…
External link:
http://arxiv.org/abs/2306.12517
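A hedged usage sketch based on FFCV's documented quickstart: convert an indexed dataset to FFCV's file format once, then stream it through the Loader with per-field decoding pipelines. Field and decoder names follow the public API, but exact signatures may differ across versions; the CIFAR-10 dataset and file paths are placeholders.

```python
# Sketch following FFCV's quickstart; exact signatures may vary by version.
import torchvision
from ffcv.writer import DatasetWriter
from ffcv.fields import RGBImageField, IntField
from ffcv.fields.decoders import SimpleRGBImageDecoder, IntDecoder
from ffcv.loader import Loader, OrderOption
from ffcv.transforms import ToTensor

# One-time conversion of an indexed dataset into FFCV's file format.
dataset = torchvision.datasets.CIFAR10('/tmp/cifar', train=True, download=True)
writer = DatasetWriter('/tmp/cifar_train.beton',
                       {'image': RGBImageField(), 'label': IntField()})
writer.from_indexed_dataset(dataset)

# Fast loading: decoding and transforms run in per-field pipelines.
loader = Loader('/tmp/cifar_train.beton',
                batch_size=512,
                num_workers=8,
                order=OrderOption.RANDOM,
                pipelines={'image': [SimpleRGBImageDecoder(), ToTensor()],
                           'label': [IntDecoder(), ToTensor()]})

for images, labels in loader:
    pass  # training step goes here
```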
The goal of data attribution is to trace model predictions back to training data. Despite a long line of work towards this goal, existing approaches to data attribution tend to force users to choose between computational tractability and efficacy…
External link:
http://arxiv.org/abs/2303.14186
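To make "tracing predictions back to training data" concrete, here is a toy first-order influence-style score (training-example gradient dotted with the test-loss gradient) for a hand-rolled logistic regression; this is only an illustration of the attribution problem, not the TRAK estimator proposed in the paper.

```python
# First-order influence-style attribution for logistic regression: score each
# training point by how its loss gradient aligns with a test point's gradient.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)
for _ in range(500):                       # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

def loss_grad(x, y_, w):
    """Gradient of the logistic loss at a single example."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y_) * x

x_test, y_test = rng.normal(size=d), 1.0
g_test = loss_grad(x_test, y_test, w)
scores = np.array([loss_grad(X[i], y[i], w) @ g_test for i in range(n)])

# Training points whose gradients align most / least with the test gradient.
print(np.argsort(-scores)[:3], np.argsort(scores)[:3])
```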
We present an approach to mitigating the risks of malicious image editing posed by large diffusion models. The key idea is to immunize images so as to make them resistant to manipulation by these models. This immunization relies on injection of imperceptible perturbations…
External link:
http://arxiv.org/abs/2302.06588
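A minimal sketch of the immunization idea, assuming a toy convolutional encoder in place of the diffusion-model components targeted in the paper: projected gradient descent drives the perturbed image's latent toward an uninformative target while the perturbation stays norm-bounded (hence imperceptible).

```python
# PGD-style "immunization" against a toy encoder standing in for the real
# editing model; eps bounds the perturbation so it stays imperceptible.
import torch
import torch.nn as nn

torch.manual_seed(0)
encoder = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 16, 3, stride=2, padding=1))
image = torch.rand(1, 3, 64, 64)
target_latent = torch.zeros_like(encoder(image))   # push the latent toward zero

eps, step, n_steps = 8 / 255, 1 / 255, 40
delta = torch.zeros_like(image, requires_grad=True)

for _ in range(n_steps):                      # projected gradient descent
    latent = encoder((image + delta).clamp(0, 1))
    loss = ((latent - target_latent) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()     # minimize distance to the target
        delta.clamp_(-eps, eps)               # keep the perturbation small
    delta.grad.zero_()

immunized = (image + delta).detach().clamp(0, 1)
```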
We study the problem of (learning) algorithm comparison, where the goal is to find differences between models trained with two different learning algorithms. We begin by formalizing this goal as one of finding distinguishing feature transformations…
External link:
http://arxiv.org/abs/2211.12491
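To make "distinguishing feature transformations" concrete, a hedged sketch that evaluates one hand-picked input transformation on two models trained with different setups (here, just different regularization strengths) and compares how often each model's predictions flip. The paper's contribution is finding such transformations automatically; everything below is a toy.

```python
# Evaluate a candidate transformation (shifting one feature) on two models
# and compare how strongly it changes each model's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model_a = LogisticRegression(C=100.0).fit(X, y)      # lightly regularized
model_b = LogisticRegression(C=0.01).fit(X, y)       # heavily regularized

X_test = rng.normal(size=(500, 10))
X_shift = X_test.copy()
X_shift[:, 0] += 2.0                                  # candidate transformation

flip_a = np.mean(model_a.predict(X_test) != model_a.predict(X_shift))
flip_b = np.mean(model_b.predict(X_test) != model_b.predict(X_shift))
print(f"prediction flips: model_a={flip_a:.2f}, model_b={flip_b:.2f}")
```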
Using transfer learning to adapt a pre-trained "source model" to a downstream "target task" can dramatically increase performance with seemingly no downside. In this work, we demonstrate that there can exist a downside after all: bias transfer, or the tendency for biases of the source model to persist…
External link:
http://arxiv.org/abs/2207.02842
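A toy sketch of how one might check for the downside described above: freeze "source" features whose quality differs across a spurious group attribute (standing in for a bias inherited from pre-training), fit only a new head on the target task, and compare per-group accuracy. The data, group split, and noise scales are all assumptions for illustration; the paper's experiments use real pre-trained vision models.

```python
# Measure a per-group accuracy gap after linear-probe "transfer" on features
# that encode one group poorly (a stand-in for an inherited bias).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)            # spurious attribute
signal = rng.normal(size=n)
y = (signal > 0).astype(int)                  # target label, independent of group

# "Source" representation: the signal is much noisier for group 1.
noise_scale = np.where(group == 1, 2.0, 0.3)
features = np.column_stack([
    signal + noise_scale * rng.normal(size=n),
    rng.normal(size=(n, 5)),
])

head = LogisticRegression().fit(features, y)  # transfer: train a new head only
pred = head.predict(features)
for g in (0, 1):
    acc = (pred[group == g] == y[group == g]).mean()
    print(f"group {g}: accuracy {acc:.2f}")
```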