Showing 1 - 10 of 212
for search: '"MALATESTA, Enrico"'
We analyze the problem of storing random pattern-label associations using two classes of continuous non-convex weights models, namely the perceptron with negative margin and an infinite-width two-layer neural network with non-overlapping receptive fields…
External link:
http://arxiv.org/abs/2410.06717
Author:
Lauditi, Clarissa, Malatesta, Enrico M., Pittorino, Fabrizio, Baldassi, Carlo, Brunel, Nicolas, Zecchina, Riccardo
Multiple neurophysiological experiments have shown that dendritic non-linearities can have a strong influence on synaptic input integration. In this work we model a single neuron as a two-layer computational unit with non-overlapping sign-constrained…
External link:
http://arxiv.org/abs/2407.07572
Author:
Kalaj, Silvio, Lauditi, Clarissa, Perugini, Gabriele, Lucibello, Carlo, Malatesta, Enrico M., Negri, Matteo
It has been recently shown that a learning transition happens when a Hopfield Network stores examples generated as superpositions of random features, where new attractors corresponding to such features appear in the model. In this work we reveal that…
External link:
http://arxiv.org/abs/2407.05658
Author:
Giorgini, Ludovico T., Jentschura, Ulrich D., Malatesta, Enrico M., Rizzo, Tommaso, Zinn-Justin, Jean
Published in:
Phys.Rev.D 110 (2024) 036003
We discuss numerical aspects of instantons in two- and three-dimensional $\phi^4$ theories with an internal $O(N)$ symmetry group, the so-called $N$-vector model. Combining asymptotic transseries expansions for large argument with convergence acceleration…
External link:
http://arxiv.org/abs/2405.18191
Recent works demonstrated the existence of a double-descent phenomenon for the generalization error of neural networks, where highly overparameterized models escape overfitting and achieve good test performance, at odds with the standard bias-variance…
External link:
http://arxiv.org/abs/2401.12610
Author:
Malatesta, Enrico M.
In these pedagogic notes I review the statistical mechanics approach to neural networks, focusing on the paradigmatic example of the perceptron architecture with binary and continuous weights, in the classification setting. I will review Gardner's…
External link:
http://arxiv.org/abs/2309.09240
Author:
Annesi, Brandon Livio, Lauditi, Clarissa, Lucibello, Carlo, Malatesta, Enrico M., Perugini, Gabriele, Pittorino, Fabrizio, Saglietti, Luca
Empirical studies on the landscape of neural networks have shown that low-energy configurations are often found in complex connected structures, where zero-energy paths between pairs of distant solutions can be constructed. Here we consider the spherical…
External link:
http://arxiv.org/abs/2305.10623
We study the binary and continuous negative-margin perceptrons as simple non-convex neural network models learning random rules and associations. We analyze the geometry of the landscape of solutions in both models and find important similarities and…
External link:
http://arxiv.org/abs/2304.13871
The Hopfield model is a paradigmatic model of neural networks that has been analyzed for many decades in the statistical physics, neuroscience, and machine learning communities. Inspired by the manifold hypothesis in machine learning, we propose and…
External link:
http://arxiv.org/abs/2303.16880
Published in:
Britannica Online
External link:
http://school.eb.com/levels/middle/article/328937