Showing 1 - 10 of 24
for search: '"Perugini, Gabriele"'
Author:
Kalaj, Silvio, Lauditi, Clarissa, Perugini, Gabriele, Lucibello, Carlo, Malatesta, Enrico M., Negri, Matteo
It has been recently shown that a learning transition happens when a Hopfield Network stores examples generated as superpositions of random features, where new attractors corresponding to such features appear in the model. In this work we reveal that…
External link:
http://arxiv.org/abs/2407.05658
Author:
Annesi, Brandon Livio, Lauditi, Clarissa, Lucibello, Carlo, Malatesta, Enrico M., Perugini, Gabriele, Pittorino, Fabrizio, Saglietti, Luca
Empirical studies on the landscape of neural networks have shown that low-energy configurations are often found in complex connected structures, where zero-energy paths between pairs of distant solutions can be constructed. Here we consider the spher…
External link:
http://arxiv.org/abs/2305.10623
We study the binary and continuous negative-margin perceptrons as simple non-convex neural network models learning random rules and associations. We analyze the geometry of the landscape of solutions in both models and find important similarities and…
External link:
http://arxiv.org/abs/2304.13871
The Hopfield model is a paradigmatic model of neural networks that has been analyzed for many decades in the statistical physics, neuroscience, and machine learning communities. Inspired by the manifold hypothesis in machine learning, we propose and…
External link:
http://arxiv.org/abs/2303.16880
Author:
Pittorino, Fabrizio, Ferraro, Antonio, Perugini, Gabriele, Feinauer, Christoph, Baldassi, Carlo, Zecchina, Riccardo
We systematize the approach to the investigation of deep neural network landscapes by basing it on the geometry of the space of implemented functions rather than the space of parameters. Grouping classifiers into equivalence classes, we develop a sta…
External link:
http://arxiv.org/abs/2202.03038
Published in:
Mach. Learn.: Sci. Technol. 3 035005 (2022)
Message-passing algorithms based on the Belief Propagation (BP) equations constitute a well-known distributed computational scheme. It is exact on tree-like graphical models and has also proven to be effective in many problems defined on graphs with…
External link:
http://arxiv.org/abs/2110.14583
Author:
Baldassi, Carlo, Lauditi, Clarissa, Malatesta, Enrico M., Pacelli, Rosalba, Perugini, Gabriele, Zecchina, Riccardo
Current deep neural networks are highly overparameterized (up to billions of connection weights) and nonlinear. Yet they can fit data almost perfectly through variants of gradient descent algorithms and achieve unexpected levels of prediction accurac…
External link:
http://arxiv.org/abs/2110.00683
Author:
Baldassi, Carlo, Lauditi, Clarissa, Malatesta, Enrico M., Perugini, Gabriele, Zecchina, Riccardo
The success of deep learning has revealed the application potential of neural networks across the sciences and opened up fundamental theoretical problems. In particular, the fact that learning algorithms based on simple variants of gradient methods a…
External link:
http://arxiv.org/abs/2107.01163
Author:
Pittorino, Fabrizio, Lucibello, Carlo, Feinauer, Christoph, Perugini, Gabriele, Baldassi, Carlo, Demyanenko, Elizaveta, Zecchina, Riccardo
The properties of flat minima in the empirical risk landscape of neural networks have been debated for some time. Increasing evidence suggests they possess better generalization capabilities than sharp ones. First, we discuss Gaussian mixt…
External link:
http://arxiv.org/abs/2006.07897
Published in:
Phys. Rev. E 97, 012152 (2018)
We first present an empirical study of the Belief Propagation (BP) algorithm, when run on the random field Ising model defined on random regular graphs in the zero temperature limit. We introduce the notion of maximal solutions for the BP equations a…
External link:
http://arxiv.org/abs/1710.05396