Showing 1 - 10 of 276 for search: '"57r70"'
Author:
Grigsby, J. Elisenda, Lindsey, Kathryn
For any fixed feedforward ReLU neural network architecture, it is well-known that many different parameter settings can determine the same function. It is less well-known that the degree of this redundancy is inhomogeneous across parameter space. In…
External link:
http://arxiv.org/abs/2410.17191
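A concrete instance of this parameter redundancy (a generic illustration, not an example taken from the paper) is the positive scaling symmetry of ReLU units: multiplying a hidden neuron's incoming weights and bias by $c > 0$ and dividing its outgoing weights by $c$ leaves the realized function unchanged, since $\mathrm{relu}(ct) = c\,\mathrm{relu}(t)$ for $c > 0$. A minimal NumPy sketch:

```python
import numpy as np

def relu_net(x, W1, b1, W2, b2):
    # One-hidden-layer ReLU network: x -> W2 @ relu(W1 @ x + b1) + b2
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)

# Rescale neuron 0: incoming weights and bias by c, outgoing column by 1/c.
c = 3.7
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[0], b1s[0], W2s[:, 0] = c * W1[0], c * b1[0], W2[:, 0] / c

x = rng.normal(size=3)
print(np.allclose(relu_net(x, W1, b1, W2, b2),
                  relu_net(x, W1s, b1s, W2s, b2)))  # True
```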
Author:
Chen, Thomas, Ewald, Patricia Muñoz
We explicitly construct zero loss neural network classifiers. We write the weight matrices and bias vectors in terms of cumulative parameters, which determine truncation maps acting recursively on input space. The configurations for the training data…
External link:
http://arxiv.org/abs/2405.07098
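One standard way a scalar truncation (clamp) can be realized with two ReLU units (an illustrative identity, not necessarily the paper's exact construction) is $\min(\max(x, a), b) = a + \mathrm{relu}(x - a) - \mathrm{relu}(x - b)$ for $a \le b$:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def truncate(x, a, b):
    # Clamp x to [a, b] with two ReLU units:
    # min(max(x, a), b) = a + relu(x - a) - relu(x - b), valid for a <= b.
    return a + relu(x - a) - relu(x - b)

x = np.linspace(-2.0, 2.0, 9)
print(np.allclose(truncate(x, -1.0, 1.0), np.clip(x, -1.0, 1.0)))  # True
```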
It is well-known that the Thom polynomial in Stiefel--Whitney classes expressing the cohomology class dual to the locus of the cusp singularity for codimension-$k$ maps and that of the corank-$2$ singularity for codimension-$(k-1)$ maps coincide. The…
External link:
http://arxiv.org/abs/2403.00332
Author:
Chen, Thomas
We consider the scenario of supervised learning in Deep Learning (DL) networks, and exploit the arbitrariness of choice in the Riemannian metric relative to which the gradient descent flow can be defined (a general fact of differential geometry). In…
External link:
http://arxiv.org/abs/2311.15487
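The general fact alluded to is that the gradient depends on the metric: with respect to a metric given by a positive definite matrix $G$, the gradient of $f$ is $\nabla_G f = G^{-1} \nabla f$, so changing the metric amounts to preconditioning the Euclidean gradient flow. A sketch on a toy quadratic (a generic illustration, not the specific metric constructed in the paper):

```python
import numpy as np

# f(x) = 0.5 * x^T A x, an ill-conditioned quadratic on R^2.
A = np.diag([100.0, 1.0])
grad = lambda x: A @ x

def descend(G, lr, steps=50):
    # Gradient descent w.r.t. the metric G: x <- x - lr * G^{-1} grad f(x).
    x = np.array([1.0, 1.0])
    Ginv = np.linalg.inv(G)
    for _ in range(steps):
        x = x - lr * Ginv @ grad(x)
    return x

print(np.linalg.norm(descend(np.eye(2), lr=0.009)))  # Euclidean metric: slow
print(np.linalg.norm(descend(A, lr=0.9)))            # adapted metric: fast
```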
Author:
Chen, Thomas, Ewald, Patricia Muñoz
We analyze geometric aspects of the gradient descent algorithm in Deep Learning (DL) networks. In particular, we prove that the globally minimizing weights and biases for the $\mathcal{L}^2$ cost obtained constructively in [Chen-Munoz Ewald 2023] for…
External link:
http://arxiv.org/abs/2311.07065
Author:
Onoue, Fumihiko
We consider hypersurfaces with boundary in $\mathbb{R}^N$ that are the critical points of the fractional area introduced by Paroni, Podio-Guidugli, and Seguin in [R. Paroni, P. Podio-Guidugli, B. Seguin, 2018]. In particular, we study the shape of su…
External link:
http://arxiv.org/abs/2310.11567
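For orientation (the standard nonlocal perimeter, stated here as background rather than the exact functional of Paroni, Podio-Guidugli, and Seguin), the $s$-fractional perimeter of a measurable set $E \subset \mathbb{R}^N$ is

$$P_s(E) = \int_E \int_{\mathbb{R}^N \setminus E} \frac{\mathrm{d}x\, \mathrm{d}y}{|x - y|^{N + s}}, \qquad s \in (0, 1),$$

whose critical points play the role that minimal hypersurfaces play for the classical area functional.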
Author:
Chen, Thomas, Ewald, Patricia Muñoz
In this paper, we approach the problem of cost (loss) minimization in underparametrized shallow neural networks through the explicit construction of upper bounds, without any use of gradient descent. A key focus is on elucidating the geometric struct…
External link:
http://arxiv.org/abs/2309.10370
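One elementary mechanism that produces such explicit upper bounds (an illustrative analogue, not the paper's construction): freezing the hidden layer of a shallow network reduces the output layer to a linear least-squares problem with a closed-form solution, so the resulting cost is a computable upper bound on the global minimum, obtained with no gradient descent at all.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 200))   # 200 inputs of dimension 3
Y = rng.normal(size=(2, 200))   # 2-dimensional targets

# Freeze an (underparametrized) random hidden layer with 4 units.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
H = np.maximum(W1 @ X + b1[:, None], 0.0)

# Closed-form output layer: W2 = argmin ||W2 @ H - Y||_F^2 via least squares.
W2, *_ = np.linalg.lstsq(H.T, Y.T, rcond=None)
bound = np.linalg.norm(W2.T @ H - Y) ** 2 / X.shape[1]
print(f"explicit upper bound on the minimal L2 cost: {bound:.4f}")
```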
Geometric structure of Deep Learning networks and construction of global ${\mathcal L}^2$ minimizers
Author:
Chen, Thomas, Ewald, Patricia Muñoz
In this paper, we explicitly determine local and global minimizers of the $\mathcal{L}^2$ cost function in underparametrized Deep Learning (DL) networks; our main goal is to shed light on their geometric structure and properties. We accomplish this b…
External link:
http://arxiv.org/abs/2309.10639
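A degenerate but exactly solvable analogue (not the paper's construction) is the purely linear case, where the $\mathcal{L}^2$ cost $\mathcal{C}[W] = \|WX - Y\|_F^2$ over a data matrix $X$ admits the explicit global minimizer

$$W^* = Y X^{+},$$

with $X^{+}$ the Moore-Penrose pseudoinverse; the paper pursues comparably explicit minimizers in the genuinely nonlinear, underparametrized ReLU setting.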
Author:
Saeki, Osamu
The Reeb space of a continuous function is the space of connected components of the level sets. In this paper we characterize those smooth functions on closed manifolds whose Reeb spaces have the structure of a finite graph. We also give several expl…
External link:
http://arxiv.org/abs/2308.05953
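Concretely (a numerical illustration of the definition, not of the paper's methods): the points of the Reeb space over a value $c$ are the connected components of the level set $f^{-1}(c)$, which can be approximated on a sampled domain by labeling the components of a thin band $\{|f - c| < \varepsilon\}$. For the double-well $f(x, y) = (x^2 - 1)^2 + y^2$, whose Reeb graph is a "Y" with two branches merging at the saddle value $f = 1$:

```python
import numpy as np
from scipy.ndimage import label

# Sample the double-well f(x, y) = (x^2 - 1)^2 + y^2 on [-2, 2]^2.
xs = np.linspace(-2, 2, 600)
X, Y = np.meshgrid(xs, xs)
F = (X**2 - 1) ** 2 + Y**2

def level_components(F, c, eps=0.1):
    # Approximate the level set f^{-1}(c) by the band {|f - c| < eps};
    # each connected component is one point of the Reeb space over c.
    _, num = label(np.abs(F - c) < eps)
    return num

print(level_components(F, 0.5))  # 2 components (below the saddle value 1)
print(level_components(F, 1.5))  # 1 component (above the saddle value)
```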
The parameter space for any fixed architecture of feedforward ReLU neural networks serves as a proxy during training for the associated class of functions - but how faithful is this representation? It is known that many different parameter settings c…
External link:
http://arxiv.org/abs/2306.06179
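Besides the rescaling symmetry illustrated above, another generic source of this redundancy (again an illustration, not an example from the paper) is permutation of hidden neurons: reordering the hidden units together with the matching rows of $(W_1, b_1)$ and columns of $W_2$ leaves the realized function unchanged.

```python
import numpy as np

def relu_net(x, W1, b1, W2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0)

rng = np.random.default_rng(2)
W1, b1, W2 = rng.normal(size=(5, 3)), rng.normal(size=5), rng.normal(size=(2, 5))

# Permute the hidden neurons: a different parameter point, the same function.
p = rng.permutation(5)
x = rng.normal(size=3)
print(np.allclose(relu_net(x, W1, b1, W2),
                  relu_net(x, W1[p], b1[p], W2[:, p])))  # True
```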