Showing 1 - 10 of 5,830
for search: '"Chevaleyre A"'
Recent research has explored the memorization capacity of multi-head attention, but these findings are constrained by unrealistic limitations on the context size. We present a novel proof for language-based Transformers that extends the current hypot…
External link:
http://arxiv.org/abs/2411.10115
Author:
Bronnec, Florian Le, Verine, Alexandre, Negrevergne, Benjamin, Chevaleyre, Yann, Allauzen, Alexandre
We introduce a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral, focusing on importing Precision and Recall metrics from image generation to text generation. This approach allows for a nuanced…
External link:
http://arxiv.org/abs/2402.10693
Rejection sampling methods have recently been proposed to improve the performance of discriminator-based generative models. However, these methods are only optimal under an unlimited sampling budget, and are usually applied to a generator trained ind…
External link:
http://arxiv.org/abs/2311.00460
Mixtures of classifiers (a.k.a. randomized ensembles) have been proposed as a way to improve robustness against adversarial attacks. However, it has been shown that existing attacks are not well suited for this kind of classifier. In this paper, we…
External link:
http://arxiv.org/abs/2307.10788
Achieving a balance between image quality (precision) and diversity (recall) is a significant challenge in the domain of generative models. Current state-of-the-art models primarily rely on optimizing heuristics, such as the Fréchet Inception Distance…
External link:
http://arxiv.org/abs/2305.18910
Author:
Gnecco-Heredia, Lucas, Chevaleyre, Yann, Negrevergne, Benjamin, Meunier, Laurent, Pydi, Muni Sreenivas
Deep neural networks are known to be vulnerable to small adversarial perturbations in test data. To defend against adversarial attacks, probabilistic classifiers have been proposed as an alternative to deterministic ones. However, literature has conf…
External link:
http://arxiv.org/abs/2302.07221
Generative models can have distinct modes of failure, such as mode dropping and low-quality samples, which cannot be captured by a single scalar metric. To address this, recent works propose evaluating generative models using precision and recall, where…
External link:
http://arxiv.org/abs/2302.00628
Author:
Edward Ousselin
Published in:
French Studies. 76:698-699