Showing 1 - 9 of 9 results for search: '"Lauditi, Clarissa"'
Author:
Lauditi, Clarissa, Malatesta, Enrico M., Pittorino, Fabrizio, Baldassi, Carlo, Brunel, Nicolas, Zecchina, Riccardo
Multiple neurophysiological experiments have shown that dendritic non-linearities can have a strong influence on synaptic input integration. In this work we model a single neuron as a two-layer computational unit with non-overlapping sign-constrained …
External link:
http://arxiv.org/abs/2407.07572
Author:
Kalaj, Silvio, Lauditi, Clarissa, Perugini, Gabriele, Lucibello, Carlo, Malatesta, Enrico M., Negri, Matteo
It has been recently shown that a learning transition happens when a Hopfield Network stores examples generated as superpositions of random features, where new attractors corresponding to such features appear in the model. In this work we reveal that …
External link:
http://arxiv.org/abs/2407.05658
In recent years statistical physics has proven to be a valuable tool to probe into large dimensional inference problems such as the ones occurring in machine learning. Statistical physics provides analytical tools to study fundamental limitations in …
External link:
http://arxiv.org/abs/2306.16097
Author:
Annesi, Brandon Livio, Lauditi, Clarissa, Lucibello, Carlo, Malatesta, Enrico M., Perugini, Gabriele, Pittorino, Fabrizio, Saglietti, Luca
Empirical studies on the landscape of neural networks have shown that low-energy configurations are often found in complex connected structures, where zero-energy paths between pairs of distant solutions can be constructed. Here we consider the spher…
External link:
http://arxiv.org/abs/2305.10623
The Hopfield model is a paradigmatic model of neural networks that has been analyzed for many decades in the statistical physics, neuroscience, and machine learning communities. Inspired by the manifold hypothesis in machine learning, we propose and …
External link:
http://arxiv.org/abs/2303.16880
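As background to the Hopfield-model entries above, a minimal sketch of the classical Hopfield storage and recall dynamics (Hebbian couplings, sign updates); the network size, pattern count, and corruption level are illustrative choices, not taken from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store P random binary patterns in an N-unit Hopfield network via the Hebbian rule.
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N  # Hebbian coupling matrix
np.fill_diagonal(W, 0)           # no self-couplings

# Recall: start from a corrupted copy of pattern 0 and iterate sign updates.
state = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
state[flip] *= -1                # flip 10 of the 100 spins

for _ in range(20):              # synchronous updates
    state = np.sign(W @ state).astype(int)
    state[state == 0] = 1        # break ties deterministically

overlap = (state @ patterns[0]) / N  # overlap with the stored pattern
print(overlap)
```

With a load of 5 patterns over 100 units (well below the classical capacity), the corrupted state typically relaxes back to the stored pattern, giving an overlap close to 1.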
Author:
Baldassi, Carlo, Lauditi, Clarissa, Malatesta, Enrico M., Pacelli, Rosalba, Perugini, Gabriele, Zecchina, Riccardo
Current deep neural networks are highly overparameterized (up to billions of connection weights) and nonlinear. Yet they can fit data almost perfectly through variants of gradient descent algorithms and achieve unexpected levels of prediction accurac…
External link:
http://arxiv.org/abs/2110.00683
Author:
Baldassi, Carlo, Lauditi, Clarissa, Malatesta, Enrico M., Perugini, Gabriele, Zecchina, Riccardo
The success of deep learning has revealed the application potential of neural networks across the sciences and opened up fundamental theoretical problems. In particular, the fact that learning algorithms based on simple variants of gradient methods a…
External link:
http://arxiv.org/abs/2107.01163
Academic article
This result cannot be displayed to unauthenticated users; logging in is required to view it.
Academic article
This result cannot be displayed to unauthenticated users; logging in is required to view it.