Showing 1 - 10 of 2,943 for search: '"A. Zecchina"'
Author:
Lauditi, Clarissa, Malatesta, Enrico M., Pittorino, Fabrizio, Baldassi, Carlo, Brunel, Nicolas, Zecchina, Riccardo
Multiple neurophysiological experiments have shown that dendritic non-linearities can have a strong influence on synaptic input integration. In this work we model a single neuron as a two-layer computational unit with non-overlapping sign-constrained … (see the sketch below the link)
External link:
http://arxiv.org/abs/2407.07572
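A minimal sketch of the kind of model the snippet describes, assuming non-negative (sign-constrained) weights, non-overlapping dendritic branches, and a tanh-like dendritic non-linearity; the branch count K, the non-linearity g, and the threshold theta are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def two_layer_neuron(x, W, theta=0.0, g=np.tanh):
    """Single neuron modelled as a two-layer unit.

    x     : input vector of length N
    W     : (K, N // K) non-negative weights, one row per dendritic branch;
            each branch acts on a non-overlapping slice of the input
    theta : somatic firing threshold
    g     : dendritic non-linearity applied to each branch current
    """
    K, n = W.shape
    branches = x.reshape(K, n)                    # non-overlapping input segments
    dendritic = g(np.sum(W * branches, axis=1))   # one non-linear current per branch
    return np.sign(np.sum(dendritic) - theta)     # binary somatic output

# Toy usage with random, sign-constrained (non-negative) weights.
rng = np.random.default_rng(0)
N, K = 100, 10
W = np.abs(rng.normal(size=(K, N // K)))          # enforce the sign constraint
x = rng.choice([-1.0, 1.0], size=N)
print(two_layer_neuron(x, W))
```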
Proteins populate a manifold in the high-dimensional sequence space whose geometrical structure guides their natural evolution. Leveraging recently-developed structure prediction tools based on transformer models, we first examine the protein sequence …
External link:
http://arxiv.org/abs/2311.06034
We study the binary and continuous negative-margin perceptrons as simple non-convex neural network models learning random rules and associations. We analyze the geometry of the landscape of solutions in both models and find important similarities and … (see the sketch below the link)
External link:
http://arxiv.org/abs/2304.13871
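A hedged illustration of the margin constraint the snippet refers to: a random pattern is "learned" when its normalized stability y * (w . x) / sqrt(N) exceeds a margin kappa, which here is allowed to be negative. The binary- and Gaussian-weight choices and the random-pattern setup below are generic, not the paper's exact protocol.

```python
import numpy as np

def satisfied_fraction(w, X, y, kappa):
    """Fraction of patterns whose margin constraint y_mu * (x_mu . w) / sqrt(N) >= kappa holds."""
    N = len(w)
    stabilities = y * (X @ w) / np.sqrt(N)
    return np.mean(stabilities >= kappa)

rng = np.random.default_rng(1)
N, P = 200, 300                              # N weights, P random input/label pairs
X = rng.choice([-1.0, 1.0], size=(P, N))
y = rng.choice([-1.0, 1.0], size=P)

w_binary = rng.choice([-1.0, 1.0], size=N)   # binary perceptron weights
w_cont = rng.normal(size=N)                  # continuous (spherical) perceptron weights

for kappa in (0.5, 0.0, -0.5):               # a negative kappa relaxes the constraints
    print(kappa,
          satisfied_fraction(w_binary, X, y, kappa),
          satisfied_fraction(w_cont, X, y, kappa))
```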
Artificial networks have been studied through the prism of statistical mechanics as disordered systems since the 80s, starting from the simple models of Hopfield's associative memory and the single-neuron perceptron classifier. Assuming data is generated … (see the sketch below the link)
External link:
http://arxiv.org/abs/2304.06636
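The snippet names Hopfield's associative memory as one of the classic starting points; below is a generic Hebbian-storage Hopfield sketch (not taken from the paper) that stores a few random binary patterns and retrieves one of them from a corrupted cue.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 200, 5                                   # N spins, P stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian couplings J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, with no self-couplings.
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

# Start from a corrupted version of pattern 0 and run asynchronous sign updates.
state = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
state[flip] *= -1                               # 20% of the spins flipped

for _ in range(10):                             # a few asynchronous sweeps
    for i in rng.permutation(N):
        state[i] = 1 if J[i] @ state >= 0 else -1

overlap = state @ patterns[0] / N               # retrieval quality in [-1, 1]
print("overlap with stored pattern:", overlap)
```

With only five patterns on two hundred spins the load is well below the classical storage limit, so the overlap should come out close to one.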
Author:
Pittorino, Fabrizio, Ferraro, Antonio, Perugini, Gabriele, Feinauer, Christoph, Baldassi, Carlo, Zecchina, Riccardo
We systematize the approach to the investigation of deep neural network landscapes by basing it on the geometry of the space of implemented functions rather than the space of parameters. Grouping classifiers into equivalence classes, we develop a sta… (see the sketch below the link)
External link:
http://arxiv.org/abs/2202.03038
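One concrete reason to work in the space of implemented functions rather than parameters, as the snippet suggests, is that many distinct weight settings realize the same classifier. The sketch below checks the familiar hidden-unit permutation symmetry of a small two-layer network; it is an illustrative example, not the paper's full symmetry-removal procedure.

```python
import numpy as np

def preactivation(x, W1, w2):
    """Two-layer network output before the final sign: w2 . tanh(W1 @ x)."""
    return w2 @ np.tanh(W1 @ x)

rng = np.random.default_rng(3)
N, H = 20, 7                              # input size, hidden units
W1 = rng.normal(size=(H, N))
w2 = rng.normal(size=H)

# Permuting the hidden units (rows of W1) together with the output weights
# moves to a different point in parameter space but implements the same function.
perm = rng.permutation(H)
W1_p, w2_p = W1[perm], w2[perm]

X = rng.normal(size=(1000, N))
outputs   = [preactivation(x, W1, w2) for x in X]
outputs_p = [preactivation(x, W1_p, w2_p) for x in X]
print("same function on 1000 random inputs:", np.allclose(outputs, outputs_p))
```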
We apply digitized Quantum Annealing (QA) and Quantum Approximate Optimization Algorithm (QAOA) to a paradigmatic task of supervised learning in artificial neural networks: the optimization of synaptic weights for the binary perceptron. At variance with … (see the sketch below the link)
External link:
http://arxiv.org/abs/2112.10219
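A classical sketch of the optimization target mentioned in the snippet, assuming the standard binary-perceptron cost (number of misclassified random patterns) over weights w in {-1, +1}^N. This is the kind of cost one would encode as a problem Hamiltonian for QA or QAOA; here it is only minimized by brute force on a tiny instance.

```python
import numpy as np
from itertools import product

def errors(w, X, y):
    """Binary-perceptron cost: number of patterns with non-positive stability y_mu * (x_mu . w)."""
    return int(np.sum(y * (X @ w) <= 0))

rng = np.random.default_rng(4)
N, P = 12, 8                                  # tiny instance so exhaustive search is feasible
X = rng.choice([-1.0, 1.0], size=(P, N))
y = rng.choice([-1.0, 1.0], size=P)

# Exhaustive classical search over all 2^N binary weight assignments; quantum
# heuristics aim to approximate this minimization at sizes where enumeration fails.
best = min((np.array(w) for w in product([-1.0, 1.0], repeat=N)),
           key=lambda w: errors(w, X, y))
print("minimum number of training errors:", errors(best, X, y))
```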
The differing ability of polypeptide conformations to act as the native state of proteins has long been rationalized in terms of differing kinetic accessibility or thermodynamic stability. Building on the successful applications of physical concepts …
External link:
http://arxiv.org/abs/2111.12987
Published in:
Mach. Learn.: Sci. Technol. 3 035005 (2022)
Message-passing algorithms based on the Belief Propagation (BP) equations constitute a well-known distributed computational scheme. It is exact on tree-like graphical models and has also proven to be effective in many problems defined on graphs with … (see the sketch below the link)
External link:
http://arxiv.org/abs/2110.14583
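A minimal sum-product Belief Propagation sketch, assuming a pairwise binary model on a short chain (a tree, so BP is exact, as the snippet states); the field and coupling values are arbitrary illustrative numbers, and the marginal of the first spin is checked against exact enumeration.

```python
import numpy as np
from itertools import product

# Pairwise binary model on a chain of L spins (a tree, so BP is exact here):
# p(s) ∝ prod_i exp(h_i * s_i) * prod_i exp(J * s_i * s_{i+1})
L, J = 5, 0.6
h = np.array([0.3, -0.2, 0.1, 0.0, -0.4])
spins = [-1, 1]

def neighbours(i):
    return [j for j in (i - 1, i + 1) if 0 <= j < L]

# m[(i, j)][b] = message from spin i to neighbour j, evaluated at s_j = spins[b].
m = {(i, j): np.ones(2) for i in range(L) for j in neighbours(i)}

for _ in range(2 * L):                               # plenty of sweeps for a short chain
    for (i, j) in list(m):
        msg = np.zeros(2)
        for b, sj in enumerate(spins):
            for a, si in enumerate(spins):
                incoming = np.prod([m[(k, i)][a] for k in neighbours(i) if k != j])
                msg[b] += np.exp(h[i] * si + J * si * sj) * incoming
        m[(i, j)] = msg / msg.sum()

# BP marginal of spin 0 compared with exact enumeration over all 2^L states.
belief = np.array([np.exp(h[0] * s) for s in spins])
for k in neighbours(0):
    belief = belief * m[(k, 0)]
belief /= belief.sum()

weights = {s: np.exp(np.dot(h, s) + J * sum(s[i] * s[i + 1] for i in range(L - 1)))
           for s in product(spins, repeat=L)}
Z = sum(weights.values())
exact = np.array([sum(w for s, w in weights.items() if s[0] == v) for v in spins]) / Z

print("BP marginal of spin 0:   ", belief)
print("exact marginal of spin 0:", exact)
```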
Author:
Baldassi, Carlo, Lauditi, Clarissa, Malatesta, Enrico M., Pacelli, Rosalba, Perugini, Gabriele, Zecchina, Riccardo
Current deep neural networks are highly overparameterized (up to billions of connection weights) and nonlinear. Yet they can fit data almost perfectly through variants of gradient descent algorithms and achieve unexpected levels of prediction accuracy … (see the sketch below the link)
External link:
http://arxiv.org/abs/2110.00683
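A small numerical illustration of the claim in the snippet: a heavily overparameterized one-hidden-layer network trained by plain gradient descent drives the training error on randomly labeled data to essentially zero. The widths, learning rate, and random-label task are arbitrary choices for this sketch, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(5)
P, N, H = 20, 10, 500                 # far more parameters (H*N + H) than data points
X = rng.normal(size=(P, N))
y = rng.choice([-1.0, 1.0], size=P)   # random labels: a pure memorization task

W1 = rng.normal(size=(H, N)) / np.sqrt(N)
w2 = rng.normal(size=H) / np.sqrt(H)
lr = 0.01

for step in range(2000):              # plain full-batch gradient descent on the MSE
    hidden = np.tanh(X @ W1.T)                              # (P, H)
    err = hidden @ w2 - y                                   # (P,)
    grad_w2 = hidden.T @ err / P
    grad_W1 = ((np.outer(err, w2) * (1 - hidden ** 2)).T @ X) / P
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

out = np.tanh(X @ W1.T) @ w2
print("train MSE:     ", np.mean((out - y) ** 2))
print("train accuracy:", np.mean(np.sign(out) == y))
```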
Author:
Baldassi, Carlo, Lauditi, Clarissa, Malatesta, Enrico M., Perugini, Gabriele, Zecchina, Riccardo
The success of deep learning has revealed the application potential of neural networks across the sciences and opened up fundamental theoretical problems. In particular, the fact that learning algorithms based on simple variants of gradient methods a…
External link:
http://arxiv.org/abs/2107.01163