Showing 1 - 10 of 55 for search: '"Perekrestenko, A."'
We show that every $d$-dimensional probability distribution of bounded support can be generated through deep ReLU networks out of a $1$-dimensional uniform input distribution. What is more, this is possible without incurring a cost - in terms of appr…
External link:
http://arxiv.org/abs/2107.12466
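The abstract above concerns transforming one-dimensional uniform noise into a target distribution. A classical one-dimensional analogue of this idea is inverse transform sampling; the sketch below (a hypothetical illustration, not the paper's ReLU-network construction) pushes uniform samples through the inverse CDF of an exponential distribution:

```python
import math
import random

def sample_exponential(rate, n, seed=0):
    """Draw n samples from Exp(rate) by pushing uniform noise
    through the inverse CDF F^{-1}(u) = -ln(1 - u) / rate."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

# Sample mean should be close to 1/rate = 0.5.
samples = sample_exponential(rate=2.0, n=100_000)
mean = sum(samples) / len(samples)
```

The paper replaces the explicit inverse CDF with a deep ReLU network, which is what makes the construction extend beyond one dimension.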
We present an explicit deep neural network construction that transforms uniformly distributed one-dimensional noise into an arbitrarily close approximation of any two-dimensional Lipschitz-continuous target distribution. The key ingredient of our des…
External link:
http://arxiv.org/abs/2006.16664
This paper develops fundamental limits of deep neural network learning by characterizing what is possible if no constraints are imposed on the learning algorithm and on the amount of training data. Concretely, we consider Kolmogorov-optimal approxima…
External link:
http://arxiv.org/abs/1901.02220
We show that finite-width deep ReLU neural networks yield rate-distortion optimal approximation (B\"olcskei et al., 2018) of polynomials, windowed sinusoidal functions, one-dimensional oscillatory textures, and the Weierstrass function, a fractal fun…
External link:
http://arxiv.org/abs/1806.01528
We propose two deep neural network architectures for classification of arbitrary-length electrocardiogram (ECG) recordings and evaluate them on the atrial fibrillation (AF) classification data set provided by the PhysioNet/CinC Challenge 2017. The fi…
External link:
http://arxiv.org/abs/1710.06122
Coordinate descent methods employ random partial updates of decision variables in order to solve huge-scale convex optimization problems. In this work, we introduce new adaptive rules for the random selection of their updates. By adaptive, we mean th…
External link:
http://arxiv.org/abs/1703.02518
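The coordinate descent setting described in this abstract can be sketched in a few lines. The example below minimizes a least-squares objective by exact minimization over one coordinate per iteration; it uses plain uniform coordinate selection, not the paper's adaptive selection rules, and all names are illustrative:

```python
import random

def coordinate_descent_lsq(A, b, iters=2000, seed=0):
    """Minimize ||Ax - b||^2 over x by exact line minimization
    along one uniformly chosen coordinate per iteration."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    x = [0.0] * n
    # Maintain the residual r = b - Ax incrementally.
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
    for _ in range(iters):
        j = rng.randrange(n)                     # uniform coordinate choice
        col = [A[i][j] for i in range(m)]
        denom = sum(c * c for c in col)
        if denom == 0.0:
            continue
        step = sum(r[i] * col[i] for i in range(m)) / denom  # exact 1-D minimizer
        x[j] += step
        for i in range(m):
            r[i] -= step * col[i]
    return x

# Small sanity check: for a diagonal system the minimizer is [2, 3].
x = coordinate_descent_lsq([[2.0, 0.0], [0.0, 3.0]], [4.0, 9.0])
```

Adaptive rules, as studied in the paper, replace the uniform `rng.randrange(n)` choice with update probabilities that change over the iterations.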
Published in:
IEEE Transactions on Information Theory, 67 (5)
This paper develops fundamental limits of deep neural network learning by characterizing what is possible if no constraints are imposed on the learning algorithm and on the amount of training data. Concretely, we consider Kolmogorov-optimal approxima…
Published in:
Heart Rhythm. 19:S204-S205
Published in:
Partial Differential Equations and Applications, 2 (5)
We show that every d-dimensional probability distribution of bounded support can be generated through deep ReLU networks out of a 1-dimensional uniform input distribution. What is more, this is possible without incurring a cost - in terms of approxim…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a1705227de61b7c53ee801424b8a467a
Author:
Perekrestenko, Dmytro
The first part of this thesis develops fundamental limits of deep neural network learning by characterizing what is possible if no constraints are imposed on the learning algorithm and on the amount of training data. Concretely, we consider Kolmogoro…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::262300a14642eb4c77b5cc50af32f1ed