Showing 1 - 10 of 2,058 for the search: '"Abbe, P."'
Author:
Whitford, Abbé M., Rivera-Morales, Hugo, Howlett, Cullan, Vargas-Magaña, Mariana, Fromenteau, Sébastien, Davis, Tamara M., Pérez-Fernández, Alejandro, de Mattia, Arnaud, Ahlen, Steven, Bianchi, Davide, Brooks, David, Burtin, Etienne, Claybaugh, Todd, de la Macorra, Axel, Doel, Peter, Ferraro, Simone, Forero-Romero, Jaime E., Gaztañaga, Enrique, Gontcho, Satya Gontcho A, Gutierrez, Gaston, Juneau, Stephanie, Kehoe, Robert, Kirkby, David, Kisner, Theodore, Koposov, Sergey, Landriau, Martin, Guillou, Laurent Le, Meisner, Aaron, Miquel, Ramon, Prada, Francisco, Pérez-Ràfols, Ignasi, Rossi, Graziano, Sanchez, Eusebio, Schubnell, Michael, Sprayberry, David, Tarlé, Gregory, Weaver, Benjamin Alan, Zarrouk, Pauline, Zou, Hu
In the early Universe, neutrinos decouple quickly from the primordial plasma and propagate without further interactions. The impact of free-streaming neutrinos is to create a temporal shift in the gravitational potential that impacts the acoustic wav…
External link:
http://arxiv.org/abs/2412.05990
Parities have become a standard benchmark for evaluating learning algorithms. Recent works show that regular neural networks trained by gradient descent can efficiently learn degree $k$ parities on uniform inputs for constant $k$, but fail to do so w…
External link:
http://arxiv.org/abs/2412.04910
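The degree-$k$ parity targets mentioned in the abstract above can be sketched in a few lines (a minimal illustration with assumed sizes, not code from the paper):

```python
import random

def parity(x, subset):
    """Degree-k parity target: XOR of the bits of x at the coordinates in `subset`."""
    return sum(x[i] for i in subset) % 2

# Hypothetical sizes for illustration: n-bit uniform inputs, a degree-3 parity
# supported on the first k coordinates.
n, k = 10, 3
subset = tuple(range(k))
xs = [[random.randint(0, 1) for _ in range(n)] for _ in range(8)]
ys = [parity(x, subset) for x in xs]
```

Here "degree $k$" refers to the number of relevant coordinates; the label depends only on the XOR of those $k$ bits.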
In 1948, Shannon used a probabilistic argument to show the existence of codes achieving a maximal rate defined by the channel capacity. In 1954, Muller and Reed introduced a simple deterministic code construction, based on polynomial evaluations, con…
External link:
http://arxiv.org/abs/2411.13493
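The polynomial-evaluation construction referenced above (Reed-Muller codes) can be sketched as follows; this is a generic textbook-style generator matrix for RM($r$, $m$), not the paper's own code:

```python
from itertools import combinations, product

def rm_generator_rows(r, m):
    """Rows of a Reed-Muller RM(r, m) generator matrix: evaluations of all
    monomials of degree <= r at the 2^m points of the Boolean cube."""
    points = list(product([0, 1], repeat=m))
    rows = []
    for deg in range(r + 1):
        for S in combinations(range(m), deg):
            # Monomial prod_{i in S} x_i evaluated at every point.
            rows.append([int(all(p[i] for i in S)) for p in points])
    return rows
```

For example, `rm_generator_rows(1, 3)` yields 4 rows of length 8: the all-ones row (the constant monomial) plus one row per coordinate function.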
Learning with identical train and test distributions has been extensively investigated both practically and theoretically. Much remains to be understood, however, in statistical learning under distribution shifts. This paper focuses on a distribution…
External link:
http://arxiv.org/abs/2410.23461
Modern vision models have achieved remarkable success in benchmarks where local features provide critical information about the target. There is now a growing interest in solving tasks that require more global reasoning, where local features offer no…
External link:
http://arxiv.org/abs/2410.08165
Can Transformers predict new syllogisms by composing established ones? More generally, what type of targets can be learned by such models from scratch? Recent works show that Transformers can be Turing-complete in terms of expressivity, but this does…
External link:
http://arxiv.org/abs/2406.06467
We investigate the out-of-domain generalization of random feature (RF) models and Transformers. We first prove that in the `generalization on the unseen (GOTU)' setting, where training data is fully seen in some part of the domain but testing is made…
External link:
http://arxiv.org/abs/2406.06354
Author:
Abbe, Emmanuel, Sandon, Colin
This paper shows that a class of codes such as Reed-Muller (RM) codes have vanishing bit-error probability below capacity on symmetric channels. The proof relies on the notion of `camellia codes': a class of symmetric codes decomposable into `camelli…
External link:
http://arxiv.org/abs/2312.04329
Author:
Boix-Adsera, Enric, Saremi, Omid, Abbe, Emmanuel, Bengio, Samy, Littwin, Etai, Susskind, Joshua
We investigate the capabilities of transformer models on relational reasoning tasks. In these tasks, models are trained on a set of strings encoding abstract relations, and are then tested out-of-distribution on data that contains symbols that did no…
External link:
http://arxiv.org/abs/2310.09753
In this work, we introduce Boolformer, the first Transformer architecture trained to perform end-to-end symbolic regression of Boolean functions. First, we show that it can predict compact formulas for complex functions which were not seen during tra…
External link:
http://arxiv.org/abs/2309.12207