Showing 1 - 10 of 24 results
for search: '"Petrini, Leonardo"'
Author:
Petrini, Leonardo
Artificial intelligence, particularly the subfield of machine learning, has seen a paradigm shift towards data-driven models that learn from and adapt to data. This has resulted in unprecedented advancements in various domains such as natural language …
External link:
http://arxiv.org/abs/2310.16154
Author:
Cagnetta, Francesco, Petrini, Leonardo, Tomasini, Umberto M., Favero, Alessandro, Wyart, Matthieu
Published in:
Phys. Rev. X 14, 031001 (2024)
Deep learning algorithms demonstrate a surprising ability to learn high-dimensional tasks from limited examples. This is commonly attributed to the depth of neural networks, enabling them to build a hierarchy of abstract, low-dimensional data representations …
External link:
http://arxiv.org/abs/2307.02129
A central question of machine learning is how deep nets manage to learn tasks in high dimensions. An appealing hypothesis is that they achieve this feat by building a representation of the data where information irrelevant to the task is lost. For im…
External link:
http://arxiv.org/abs/2210.01506
It is widely believed that the success of deep networks lies in their ability to learn a meaningful representation of the features of the data. Yet, understanding when and how this feature learning improves performance remains a challenge: for example …
External link:
http://arxiv.org/abs/2206.12314
Understanding why deep nets can classify data in large dimensions remains a challenge. It has been proposed that they do so by becoming stable to diffeomorphisms, yet existing empirical measurements support that it is often not the case. We revisit t…
External link:
http://arxiv.org/abs/2105.02468
Deep learning algorithms are responsible for a technological revolution in a variety of tasks including image recognition or Go playing. Yet, why they work is not understood. Ultimately, they manage to classify data lying in high dimension -- a feat …
External link:
http://arxiv.org/abs/2012.15110
Published in:
Journal of Statistical Mechanics: Theory and Experiment, Volume 2021, April 2021
We study how neural networks compress uninformative input space in models where data lie in $d$ dimensions, but whose labels vary only within a linear manifold of dimension $d_\parallel < d$. We show that for a one-hidden-layer network initialized with …
External link:
http://arxiv.org/abs/2007.11471
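The data model in the abstract above can be illustrated with a minimal sketch. Assuming (hypothetically) Gaussian inputs and a sign label that depends only on the first $d_\parallel$ coordinates, labels are invariant to perturbations along the remaining $d - d_\parallel$ uninformative directions -- the structure a network can exploit by compressing that subspace:

```python
import numpy as np

rng = np.random.default_rng(0)

d, d_par, n = 10, 2, 1000  # ambient dim, informative dim, sample count

# Data lie in d dimensions; the label depends only on a linear
# manifold of dimension d_par < d (here: the first d_par axes).
X = rng.standard_normal((n, d))
y = np.sign(X[:, :d_par].sum(axis=1))

# Perturbing inputs along the d - d_par uninformative directions
# leaves every label unchanged.
X_pert = X.copy()
X_pert[:, d_par:] += rng.standard_normal((n, d - d_par))
y_pert = np.sign(X_pert[:, :d_par].sum(axis=1))
assert np.array_equal(y, y_pert)
```

This is only a toy instance of the setting, not the paper's experimental setup; the specific label rule is an illustrative assumption.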
Academic article
Published in:
In Physics Reports 15 August 2021 924:1-18
Academic article