Showing 1 - 10 of 1,111
for the search: '"Chevaleyre A"'
Recent research has explored the memorization capacity of multi-head attention, but these findings are constrained by unrealistic limitations on the context size. We present a novel proof for language-based Transformers that extends the current hypotheses …
External link:
http://arxiv.org/abs/2411.10115
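The object whose capacity is analyzed above is standard multi-head attention; as a point of reference, here is a minimal NumPy sketch of one attention layer. Shapes and weights are purely illustrative, not the paper's construction.

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Standard multi-head self-attention over a length-T sequence X of shape (T, d)."""
    T, d = X.shape
    dh = d // n_heads                                               # per-head dimension
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                                # each (T, d)
    split = lambda M: M.reshape(T, n_heads, dh).transpose(1, 0, 2)  # -> (heads, T, dh)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(dh)               # (heads, T, T)
    heads = softmax(scores) @ Vh                                    # (heads, T, dh)
    concat = heads.transpose(1, 0, 2).reshape(T, d)                 # back to (T, d)
    return concat @ Wo

rng = np.random.default_rng(0)
T, d, H = 8, 16, 4                                                  # context size, model dim, heads
X = rng.normal(size=(T, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(4))
print(multi_head_attention(X, Wq, Wk, Wv, Wo, H).shape)             # (8, 16)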
Author:
Bronnec, Florian Le, Verine, Alexandre, Negrevergne, Benjamin, Chevaleyre, Yann, Allauzen, Alexandre
We introduce a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral, focusing on importing Precision and Recall metrics from image generation to text generation. This approach allows for a nuanced …
External link:
http://arxiv.org/abs/2402.10693
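The Precision and Recall metrics being imported are commonly computed as k-nearest-neighbour support estimates in an embedding space; below is a toy sketch of that idea on synthetic embedding vectors. This is a simplification, not necessarily the paper's exact protocol.

import numpy as np

def knn_radius(points, k):
    """Distance from each point to its k-th nearest neighbour within `points`."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]        # column 0 is the distance to the point itself

def precision_recall(real, gen, k=3):
    """k-NN support estimates: precision = share of generated samples falling inside the
    estimated real support, recall = share of real samples inside the generated one."""
    r_real, r_gen = knn_radius(real, k), knn_radius(gen, k)
    d_gen_real = np.linalg.norm(gen[:, None, :] - real[None, :, :], axis=-1)
    d_real_gen = d_gen_real.T
    precision = np.mean((d_gen_real <= r_real[None, :]).any(axis=1))
    recall = np.mean((d_real_gen <= r_gen[None, :]).any(axis=1))
    return precision, recall

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 8))           # stand-in for text embeddings
gen = rng.normal(size=(500, 8)) * 0.5      # low-diversity "generator"
print(precision_recall(real, gen))         # high precision, lower recall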
Rejection sampling methods have recently been proposed to improve the performance of discriminator-based generative models. However, these methods are only optimal under an unlimited sampling budget, and are usually applied to a generator trained independently …
External link:
http://arxiv.org/abs/2311.00460
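For context, the basic discriminator-based rejection sampling recipe treats the discriminator as a density-ratio estimator and thins generator samples accordingly; here is a toy 1-D sketch, illustrative only and without the sampling-budget constraint the paper is concerned with.

import numpy as np

def rejection_sample(generator, disc_logit, n_keep, cap, rng):
    """Keep generated samples x with probability min(1, r(x)/cap), where
    r(x) = exp(disc_logit(x)) estimates the density ratio p_data(x)/p_gen(x)."""
    kept = []
    while len(kept) < n_keep:
        x = generator(rng)
        ratio = np.exp(disc_logit(x))      # discriminator as density-ratio estimator
        if rng.uniform() < min(1.0, ratio / cap):
            kept.append(x)
    return np.array(kept)

# Toy 1-D example: data ~ N(0, 1), generator ~ N(1, 1); the optimal
# discriminator logit is then log p_data(x) - log p_gen(x) = 0.5 - x.
rng = np.random.default_rng(0)
generator = lambda rng: rng.normal(loc=1.0)
disc_logit = lambda x: 0.5 - x
samples = rejection_sample(generator, disc_logit, n_keep=2000, cap=20.0, rng=rng)
print(samples.mean())                      # shifted back toward 0, the data mean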
Mixtures of classifiers (a.k.a. randomized ensembles) have been proposed as a way to improve robustness against adversarial attacks. However, it has been shown that existing attacks are not well suited for this kind of classifier. In this paper, we …
External link:
http://arxiv.org/abs/2307.10788
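A randomized ensemble in this sense draws one member classifier per input; a minimal sketch follows, together with the expected-margin gradient that attacks on such mixtures typically need to target. The linear members are illustrative, and this is not the paper's attack.

import numpy as np

class RandomizedEnsemble:
    """Mixture of linear classifiers: at inference time one member is drawn
    according to the mixture weights and used to classify the input."""
    def __init__(self, weight_vectors, probs, rng):
        self.W = weight_vectors            # (m, d): one linear classifier per row
        self.probs = probs                 # mixture weights, summing to 1
        self.rng = rng

    def predict(self, x):
        i = self.rng.choice(len(self.probs), p=self.probs)
        return np.sign(self.W[i] @ x)

    def expected_margin_grad(self, x, y):
        """Gradient (w.r.t. x) of the mixture's expected margin y * E_i[w_i . x];
        attacking this expectation, rather than one sampled member, is the usual
        remedy when standard attacks behave poorly on randomized ensembles."""
        return y * self.probs @ self.W

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
ens = RandomizedEnsemble(W, probs=np.ones(3) / 3, rng=rng)
x, y = rng.normal(size=5), 1
print(ens.predict(x), ens.expected_margin_grad(x, y))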
Achieving a balance between image quality (precision) and diversity (recall) is a significant challenge in the domain of generative models. Current state-of-the-art models primarily rely on optimizing heuristics, such as the Fréchet Inception Distance …
External link:
http://arxiv.org/abs/2305.18910
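The FID heuristic mentioned above compares Gaussian fits of real and generated feature embeddings; a standard sketch is given below, with random vectors standing in for Inception features.

import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_gen):
    """Frechet Inception Distance between two sets of feature vectors:
    ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2})."""
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    c_r = np.cov(feats_real, rowvar=False)
    c_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(c_r @ c_g)
    if np.iscomplexobj(covmean):           # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return np.sum((mu_r - mu_g) ** 2) + np.trace(c_r + c_g - 2 * covmean)

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 16))         # stand-in for Inception features
gen = rng.normal(loc=0.3, size=(1000, 16))
print(fid(real, gen))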
Author:
Gnecco-Heredia, Lucas, Chevaleyre, Yann, Negrevergne, Benjamin, Meunier, Laurent, Pydi, Muni Sreenivas
Deep neural networks are known to be vulnerable to small adversarial perturbations in test data. To defend against adversarial attacks, probabilistic classifiers have been proposed as an alternative to deterministic ones. However, literature has conflicting …
External link:
http://arxiv.org/abs/2302.07221
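One common form of probabilistic classifier in this line of work injects noise into the input and aggregates the base model's outputs; a minimal sketch with a hypothetical linear scorer follows. This is illustrative only, not the construction studied in the paper.

import numpy as np

def probabilistic_predict(score_fn, x, sigma=0.25, n_draws=100, rng=None):
    """Simple randomized classifier: average the base model's class scores over
    Gaussian input perturbations, then take the argmax. `score_fn` is any
    deterministic scorer x -> vector of class scores (hypothetical here)."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(scale=sigma, size=(n_draws,) + x.shape)
    scores = np.stack([score_fn(x + n) for n in noise])
    return np.argmax(scores.mean(axis=0))

# Toy deterministic base classifier: 3 linear score functions on 4 features.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
score_fn = lambda x: W @ x
print(probabilistic_predict(score_fn, rng.normal(size=4), rng=rng))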
Generative models can have distinct modes of failure, such as mode dropping and low-quality samples, which cannot be captured by a single scalar metric. To address this, recent works propose evaluating generative models using precision and recall, where …
External link:
http://arxiv.org/abs/2302.00628
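To make concrete why one scalar cannot separate these failure modes, here is a toy construction (my own, not the paper's metric) in which a mode-dropping generator keeps a precision-like score high while a recall-like score collapses.

import numpy as np

def nearest_dist(a, b):
    """For each row of `a`, distance to its nearest row in `b`."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).min(axis=1)

rng = np.random.default_rng(0)
modes = np.array([[0, 0], [0, 8], [8, 0], [8, 8]], dtype=float)
real = np.concatenate([m + rng.normal(size=(250, 2)) for m in modes])
# "Mode-dropping" generator: high-quality samples, but only from 2 of the 4 modes.
gen = np.concatenate([m + rng.normal(size=(500, 2)) for m in modes[:2]])

threshold = 3.0
precision_like = np.mean(nearest_dist(gen, real) < threshold)   # ~1.0: samples look real
recall_like = np.mean(nearest_dist(real, gen) < threshold)      # ~0.5: half the modes are missing
print(precision_like, recall_like)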
Randomized smoothing is the dominant standard for provable defenses against adversarial examples. Nevertheless, this method has recently been proven to suffer from important information-theoretic limitations. In this paper, we argue that these limitations …
External link:
http://arxiv.org/abs/2206.01715
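For reference, the standard randomized-smoothing procedure (in the style of Cohen et al.) takes a majority vote of the base classifier under Gaussian input noise and certifies an L2 radius of sigma * Phi^{-1}(p_top). The sketch below is simplified: it uses the empirical top-class frequency directly, with none of the confidence intervals or abstention of the full method.

import numpy as np
from scipy.stats import norm

def smoothed_predict_and_radius(classify, x, sigma=0.5, n=1000, rng=None):
    """Majority vote under Gaussian noise plus the certified L2 radius."""
    rng = rng or np.random.default_rng()
    votes = np.array([classify(x + rng.normal(scale=sigma, size=x.shape))
                      for _ in range(n)])
    counts = np.bincount(votes)
    top = counts.argmax()
    p_top = min(counts[top] / n, 1 - 1e-6)          # clip so Phi^{-1} stays finite
    radius = sigma * norm.ppf(p_top) if p_top > 0.5 else 0.0
    return top, radius

# Toy base classifier: 2 classes split by a linear boundary in 5 dimensions.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
classify = lambda x: int(w @ x > 0)
x = w / np.linalg.norm(w)                           # a point safely on the positive side
print(smoothed_predict_and_radius(classify, x, rng=rng))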
We propose the first regret-based approach to the Graphical Bilinear Bandits problem, where $n$ agents in a graph play a stochastic bilinear bandit game with each of their neighbors. This setting reveals a combinatorial NP-hard problem that prevents …
External link:
http://arxiv.org/abs/2206.00466
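A toy version of the setting: agents sit on graph nodes and pick arms, and each edge pays a noisy bilinear reward x_i^T M* x_j. The combinatorial difficulty alluded to is that the best joint assignment lives in a space of size (number of arms)^(number of agents). This is my own illustrative setup, not the paper's algorithm.

import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_agents, d, n_arms = 4, 3, 5
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]            # a 4-cycle of agents
arms = rng.normal(size=(n_arms, d))                 # shared arm set
M_star = rng.normal(size=(d, d))                    # unknown parameter matrix

def play_round(choices, noise_std=0.1):
    """Rewards observed on every edge for one joint arm choice (one index per agent)."""
    return {(i, j): arms[choices[i]] @ M_star @ arms[choices[j]]
                     + rng.normal(scale=noise_std)
            for i, j in edges}

# The combinatorial part: the best *joint* assignment must be searched over
# n_arms ** n_agents possibilities; here that is small enough to enumerate.
best = max(product(range(n_arms), repeat=n_agents),
           key=lambda c: sum(arms[c[i]] @ M_star @ arms[c[j]] for i, j in edges))
print(best, play_round(best))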
In this paper, we study the problem of consistency in the context of adversarial examples. Specifically, we tackle the following question: can surrogate losses still be used as a proxy for minimizing the $0/1$ loss in the presence of an adversary that …
External link:
http://arxiv.org/abs/2205.10022
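In the linear case the adversarial losses have closed forms via the shifted margin y<w, x> - eps*||w||_2, which makes the adversarial $0/1$ loss and its hinge surrogate easy to compare side by side. The sketch below is my own illustration of that setting, not the paper's characterization.

import numpy as np

def adversarial_losses(w, X, y, eps):
    """For a linear classifier sign(w . x) and an L2 adversary of budget eps,
    the worst-case margin is y * (w . x) - eps * ||w||, giving closed forms
    for the adversarial 0/1 loss and the adversarial hinge surrogate."""
    margins = y * (X @ w) - eps * np.linalg.norm(w)
    zero_one = np.mean(margins <= 0)
    hinge = np.mean(np.maximum(0.0, 1.0 - margins))
    return zero_one, hinge

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = np.sign(X @ w_true)
for eps in (0.0, 0.1, 0.3):
    print(eps, adversarial_losses(w_true, X, y, eps))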