Showing 1 - 10
of 146
for search: "Siegel, Jonathan W."
Kolmogorov-Arnold Networks (KAN) \cite{liu2024kan} were very recently proposed as a potential alternative to the prevalent architectural backbone of many deep learning models, the multi-layer perceptron (MLP). KANs have seen success in various tasks …
External link:
http://arxiv.org/abs/2410.01803
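To make the idea behind the preceding snippet concrete: a KAN layer replaces an MLP's fixed scalar activation with a learnable univariate function on each edge, and each node simply sums its incoming edge functions. A minimal sketch follows; it substitutes a simple polynomial basis for the B-splines used in \cite{liu2024kan}, so it is illustrative only, not that paper's architecture.
```python
import numpy as np

# Minimal Kolmogorov-Arnold layer sketch (illustrative; the actual
# architecture parameterizes the edge functions with B-splines, not
# the polynomial basis assumed here).
class KANLayer:
    def __init__(self, d_in, d_out, degree=3, rng=None):
        rng = rng or np.random.default_rng(0)
        # One learnable univariate function per (input, output) edge,
        # stored as coefficients in the basis 1, t, t^2, ..., t^degree.
        self.coef = rng.normal(scale=0.1, size=(d_out, d_in, degree + 1))

    def __call__(self, x):  # x: (batch, d_in)
        # Evaluate the polynomial basis at every input coordinate.
        powers = np.stack([x**p for p in range(self.coef.shape[-1])], axis=-1)
        # phi[b, o, i] = edge function (i -> o) applied to x[b, i]
        phi = np.einsum('bip,oip->boi', powers, self.coef)
        # A KAN node sums the incoming edge functions.
        return phi.sum(axis=-1)  # (batch, d_out)

layer = KANLayer(d_in=4, d_out=2)
out = layer(np.random.default_rng(1).normal(size=(8, 4)))
```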
Let $\Omega\subset \mathbb{R}^d$ be a bounded domain. We consider the problem of how efficiently shallow neural networks with the ReLU$^k$ activation function can approximate functions from Sobolev spaces $W^s(L_p(\Omega))$ with error measured in the …
External link:
http://arxiv.org/abs/2408.10996
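For orientation, a shallow (single-hidden-layer) ReLU$^k$ network of width $n$ has the standard form
$$f_n(x) \;=\; \sum_{j=1}^{n} a_j\,\bigl(\omega_j\cdot x + b_j\bigr)_+^{k}, \qquad t_+^{k} := \max(0,t)^{k},$$
with parameters $a_j, b_j \in \mathbb{R}$ and $\omega_j \in \mathbb{R}^d$. The question raised in the snippet is how fast the best achievable error $\inf_{f_n}\|f - f_n\|$ can decay in $n$ for $f \in W^s(L_p(\Omega))$ (the error norm is cut off in the snippet above, so it is not specified here).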
We provide an a priori analysis of a certain class of numerical methods, commonly referred to as collocation methods, for solving elliptic boundary value problems. They begin with information in the form of point values of the right side $f$ of such equations …
External link:
http://arxiv.org/abs/2406.09217
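As a toy illustration of the general idea (a hedged sketch, not one of the schemes analyzed in the paper): to solve $-u'' = f$ on $(0,1)$ with $u(0)=u(1)=0$ by collocation, expand $u$ in a basis and enforce the equation exactly at a finite set of points.
```python
import numpy as np

# Toy collocation solver for -u'' = f on (0,1), u(0) = u(1) = 0.
# Expand u(x) = sum_k c_k sin(k*pi*x), so -u'' = sum_k c_k (k*pi)^2 sin(k*pi*x),
# and match the equation at n interior collocation points.
n = 20
x = np.arange(1, n + 1) / (n + 1)                      # collocation points
k = np.arange(1, n + 1)
A = (k * np.pi) ** 2 * np.sin(np.pi * np.outer(x, k))  # A[j,m] = (m*pi)^2 sin(m*pi*x_j)
f = lambda t: np.pi**2 * np.sin(np.pi * t)             # manufactured right-hand side
c = np.linalg.solve(A, f(x))                           # enforce the PDE at x_j
u = lambda t: np.sin(np.pi * np.outer(t, k)) @ c
# The exact solution of this example is u(t) = sin(pi*t); check the error:
print(np.max(np.abs(u(x) - np.sin(np.pi * x))))
```
Here the method only ever sees point values $f(x_j)$ of the right-hand side, which is the type of information the snippet refers to.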
Published in:
Computational Materials Science, Volume 247, 2025, 113495
Structure-informed materials informatics is a rapidly evolving discipline of materials science that relies on the featurization of atomic structures or configurations to construct vector, voxel, graph, graphlet, and other representations useful for machine …
External link:
http://arxiv.org/abs/2404.02849
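To make "featurization" concrete, here is a minimal, hypothetical voxel representation of an atomic configuration (not the paper's pipeline or tooling): atoms are binned onto a regular 3D occupancy grid that a downstream model could consume.
```python
import numpy as np

# Minimal voxel featurizer (illustrative only; not the representations
# used in the paper). Bins fractional atomic coordinates into a regular
# 3D occupancy grid.
def voxelize(frac_coords, grid=16):
    """frac_coords: (n_atoms, 3) fractional coordinates in [0, 1)."""
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    idx = np.clip((frac_coords * grid).astype(int), 0, grid - 1)
    for i, j, k in idx:
        vox[i, j, k] += 1.0  # occupancy count per voxel
    return vox

atoms = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])  # a bcc-like motif
print(voxelize(atoms).sum())  # 2.0: both atoms landed in the grid
```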
Canonicalization provides an architecture-agnostic method for enforcing equivariance, with generalizations such as frame-averaging recently gaining prominence as a lightweight and flexible alternative to equivariant architectures. Recent works have …
External link:
http://arxiv.org/abs/2402.16077
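The mechanism behind canonicalization is simple to sketch (a generic recipe, assumed here for illustration, not the paper's construction): map every input to a canonical representative of its symmetry orbit, and any function applied afterwards is invariant by construction.
```python
import numpy as np

# Canonicalization sketch for 2D rotations: center the point cloud and
# rotate its principal axis onto the x-axis, so f(canonicalize(x)) is
# rotation-invariant for any f. (Generic recipe, not the paper's method.)
def canonicalize(points):
    p = points - points.mean(axis=0)   # quotient out translations
    _, vecs = np.linalg.eigh(p.T @ p)  # eigenvectors, ascending eigenvalues
    R = vecs[:, ::-1].T                # rows = principal directions, largest first
    if np.linalg.det(R) < 0:           # keep the map orientation-preserving
        R[1] *= -1
    return p @ R.T

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
# Up to per-axis sign ambiguity, rotated inputs canonicalize identically:
print(np.allclose(np.abs(canonicalize(x)), np.abs(canonicalize(x @ Q.T))))
```
The leftover sign ambiguity of the principal axes is exactly the kind of finite set of candidate canonical forms that frame-averaging resolves by averaging over all of them rather than picking one.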
Author:
Siegel, Jonathan W.
We consider the problem of determining the manifold $n$-widths of Sobolev and Besov spaces with error measured in the $L_p$-norm. The manifold widths control how efficiently these spaces can be approximated by general non-linear parametric methods with …
External link:
http://arxiv.org/abs/2402.04407
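As background (a standard formulation of the quantity, stated here for orientation rather than taken from the snippet): the manifold $n$-width of a compact class $K$ in a Banach space $X$ is
$$\delta_n(K)_X \;=\; \inf_{a,\,M}\,\sup_{f\in K}\,\|f - M(a(f))\|_X,$$
where the infimum runs over continuous parameter-selection maps $a: K \to \mathbb{R}^n$ and continuous reconstruction maps $M: \mathbb{R}^n \to X$. The continuity requirement is essential: it rules out space-filling encodings that would otherwise make the width trivially small.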
We consider gradient flow/gradient descent and heavy ball/accelerated gradient descent optimization for convex objective functions. In the gradient flow case, we prove the following: 1. If $f$ does not have a minimizer, the convergence $f(x_t)\to \inf f$ …
External link:
http://arxiv.org/abs/2310.17610
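Since the snippet concerns gradient descent and the heavy ball method for convex objectives, here is a minimal sketch comparing the two on a strongly convex quadratic, using the classical Polyak parameter choices (assumed for concreteness; these are not step-size schedules taken from the paper).
```python
import numpy as np

# Gradient descent vs. heavy ball on f(x) = 0.5 * x^T A x with
# condition number L/mu = 100 (illustrative parameter choices).
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
L, mu = 100.0, 1.0

alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2                # Polyak step size
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2  # momentum

x_gd = x_hb = x_prev = np.array([1.0, 1.0])
for _ in range(200):
    x_gd = x_gd - (1.0 / L) * grad(x_gd)                     # plain GD, step 1/L
    # Heavy ball: gradient step plus momentum in the previous direction.
    x_hb, x_prev = x_hb - alpha * grad(x_hb) + beta * (x_hb - x_prev), x_hb

f = lambda x: 0.5 * x @ A @ x
print(f(x_gd), f(x_hb))  # heavy ball reaches far smaller objective values
```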
Published in:
Applied and Computational Harmonic Analysis, vol. 74, no. 101713, pp. 1-22, 2025
We investigate the approximation of functions $f$ on a bounded domain $\Omega\subset \mathbb{R}^d$ by the outputs of single-hidden-layer ReLU neural networks of width $n$. This form of nonlinear $n$-term dictionary approximation has been intensely studied …
External link:
http://arxiv.org/abs/2307.15772
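A crude baseline for this kind of width-$n$ approximation (a sketch with randomly drawn inner weights, far from the near-optimal constructions such papers study) fixes the inner parameters $(\omega_j, b_j)$ at random and solves least squares for the outer coefficients only.
```python
import numpy as np

# Width-n single-hidden-layer ReLU approximation of a target on [0, 1]:
# random inner weights + least-squares outer coefficients (a simple
# baseline, not an optimal-rate construction).
rng = np.random.default_rng(0)
n, m = 30, 500                                  # network width, sample count
x = np.linspace(0, 1, m)[:, None]               # sample points in the domain
w, b = rng.normal(size=n), rng.uniform(-1, 1, size=n)
features = np.maximum(0.0, x * w + b)           # ReLU(w_j * x + b_j), shape (m, n)
target = np.sin(2 * np.pi * x[:, 0])            # example target function
a, *_ = np.linalg.lstsq(features, target, rcond=None)
print(np.sqrt(np.mean((features @ a - target) ** 2)))  # RMS error of f_n
```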
Author:
Siegel, Jonathan W.
We study the following two related problems. The first is to determine to what error an arbitrary zonoid in $\mathbb{R}^{d+1}$ can be approximated in the Hausdorff distance by a sum of $n$ line segments. The second is to determine optimal approximation …
External link:
http://arxiv.org/abs/2307.15285
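As background (standard convex-geometry definitions, not results from the paper): a Minkowski sum of $n$ line segments is a zonotope,
$$Z_n \;=\; \sum_{i=1}^{n} [0, v_i] \;=\; \Bigl\{\, \sum_{i=1}^{n} t_i v_i \;:\; t_i \in [0,1] \,\Bigr\}, \qquad v_i \in \mathbb{R}^{d+1},$$
and zonoids are the limits of zonotopes in the Hausdorff metric. The first problem thus asks how small $d_H(Z, Z_n)$ can be made, uniformly over zonoids $Z$, by an optimal choice of the $n$ segments.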
We study the fundamental limits of matching pursuit, or the pure greedy algorithm, for approximating a target function $f$ by a linear combination $f_n$ of $n$ elements from a dictionary. When the target function is contained in the variation space …
External link:
http://arxiv.org/abs/2307.07679
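The pure greedy algorithm itself is simple to state; here is a minimal finite-dimensional sketch (the paper's setting is a general dictionary, and it studies the limits of this scheme rather than the scheme itself): at each step, pick the dictionary element most correlated with the current residual and subtract its one-term projection.
```python
import numpy as np

# Matching pursuit / pure greedy algorithm over a finite dictionary.
def matching_pursuit(f, D, n):
    """f: (m,) target; D: (m, N) dictionary with unit-norm columns; n: steps."""
    residual, f_n = f.copy(), np.zeros_like(f)
    for _ in range(n):
        corr = D.T @ residual            # inner products <residual, g> for g in D
        j = np.argmax(np.abs(corr))      # greedy selection of the best atom
        f_n += corr[j] * D[:, j]         # add its one-term projection
        residual -= corr[j] * D[:, j]    # update the residual
    return f_n

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)           # normalize the dictionary columns
f = D[:, :5] @ rng.normal(size=5)        # target in the span of 5 atoms
print(np.linalg.norm(f - matching_pursuit(f, D, 20)))
```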