Showing 1 - 10 of 29
for search: '"Parhi, Rahul"'
We consider a large class of shallow neural networks with randomly initialized parameters and rectified linear unit activation functions. We prove that these random neural networks are well-defined non-Gaussian processes. As a by-product, we demonstr…
External link:
http://arxiv.org/abs/2405.10229
Author:
Parhi, Rahul, Unser, Michael
We investigate the function-space optimality (specifically, the Banach-space optimality) of a large class of shallow neural architectures with multivariate nonlinearities/activation functions. To that end, we construct a new family of Banach spaces d…
External link:
http://arxiv.org/abs/2310.03696
Author:
Parhi, Rahul, Unser, Michael
Published in:
SIAM Journal on Mathematical Analysis, vol. 56, no. 4, pp. 4662-4686, 2024
We investigate the distributional extension of the $k$-plane transform in $\mathbb{R}^d$ and of related operators. We parameterize the $k$-plane domain as the Cartesian product of the Stiefel manifold of orthonormal $k$-frames in $\mathbb{R}^d$ with…
External link:
http://arxiv.org/abs/2310.01233
We investigate the approximation of functions $f$ on a bounded domain $\Omega\subset \mathbb{R}^d$ by the outputs of single-hidden-layer ReLU neural networks of width $n$. This form of nonlinear $n$-term dictionary approximation has been intensely st…
External link:
http://arxiv.org/abs/2307.15772
Published in:
Journal of Machine Learning Research, vol. 25, no. 231, pp. 1-40, 2024
This paper introduces a novel theoretical framework for the analysis of vector-valued neural networks through the development of vector-valued variation spaces, a new class of reproducing kernel Banach spaces. These spaces emerge from studying the re…
External link:
http://arxiv.org/abs/2305.16534
Author:
Parhi, Rahul, Nowak, Robert D.
Published in:
IEEE Signal Processing Magazine, vol. 40, no. 6, pp. 63-74, Sept. 2023
Deep learning has been wildly successful in practice and most state-of-the-art machine learning methods are based on neural networks. Lacking, however, is a rigorous mathematical theory that adequately explains the amazing performance of deep neural…
External link:
http://arxiv.org/abs/2301.09554
Author:
Parhi, Rahul, Nowak, Robert D.
Published in:
IEEE Transactions on Information Theory, vol. 69, no. 2, pp. 1125-1140, Feb. 2023
We study the problem of estimating an unknown function from noisy data using shallow ReLU neural networks. The estimators we study minimize the sum of squared data-fitting errors plus a regularization term proportional to the squared Euclidean norm o…
External link:
http://arxiv.org/abs/2109.08844
Author:
Parhi, Rahul, Nowak, Robert D.
Published in:
SIAM Journal on Mathematics of Data Science, vol. 4, no. 2, pp. 464-489, 2022
We develop a variational framework to understand the properties of functions learned by fitting deep neural networks with rectified linear unit activations to data. We propose a new function space, which is reminiscent of classical bounded variation-…
External link:
http://arxiv.org/abs/2105.03361
Author:
Parhi, Rahul, Nowak, Robert D.
Published in:
Journal of Machine Learning Research, vol. 22, no. 43, pp. 1-40, 2021
We develop a variational framework to understand the properties of the functions learned by neural networks fit to data. We propose and study a family of continuous-domain linear inverse problems with total variation-like regularization in the Radon…
External link:
http://arxiv.org/abs/2006.05626
Author:
Parhi, Rahul, Nowak, Robert D.
Published in:
IEEE Signal Processing Letters, vol. 27, pp. 1779-1783, 2020
A wide variety of activation functions have been proposed for neural networks. The Rectified Linear Unit (ReLU) is especially popular today. There are many practical reasons that motivate the use of the ReLU. This paper provides new theoretical chara…
External link:
http://arxiv.org/abs/1910.02333