Showing 1 - 10 of 151 for search: '"Batselier, Kim"'
Author:
Wesel, Frederiek; Batselier, Kim
The ability to express a learning task in terms of a primal and a dual optimization problem lies at the core of a plethora of machine learning methods. For example, Support Vector Machine (SVM), Least-Squares Support Vector Machine (LS-SVM), Ridge Regression …
External link:
http://arxiv.org/abs/2410.10504
The state-of-the-art tensor network Kalman filter lifts the curse of dimensionality for high-dimensional recursive estimation problems. However, the required rounding operation can cause filter divergence due to the loss of positive definiteness of …
External link:
http://arxiv.org/abs/2409.03276
Author:
Batselier, Kim
Specifying a prior distribution is an essential part of solving Bayesian inverse problems. The prior encodes a belief on the nature of the solution, and this regularizes the problem. In this article we completely characterize a Gaussian prior that encodes …
External link:
http://arxiv.org/abs/2406.17597
Author:
Wesel, Frederiek; Batselier, Kim
Tensor Networks (TNs) have recently been used to speed up kernel machines by constraining the model weights, yielding exponential computational and storage savings. In this paper we prove that the outputs of Canonical Polyadic Decomposition (CPD) and …
External link:
http://arxiv.org/abs/2403.19500
Compressed sensing (CS) techniques demand significant storage and computational resources when recovering high-dimensional sparse signals. Block CS (BCS), a special class of CS, addresses both the storage and complexity issues by partitioning the …
External link:
http://arxiv.org/abs/2403.04688
This paper presents a method for approximate Gaussian process (GP) regression with tensor networks (TNs). A parametric approximation of a GP uses a linear combination of basis functions, where the accuracy of the approximation depends on the total number …
External link:
http://arxiv.org/abs/2310.20630
Author:
Wesel, Frederiek; Batselier, Kim
In the context of kernel machines, polynomial and Fourier features are commonly used to provide a nonlinear extension to linear models by mapping the data to a higher-dimensional space. Unless one considers the dual formulation of the learning problem …
External link:
http://arxiv.org/abs/2309.05436
How Informative is the Approximation Error from Tensor Decomposition for Neural Network Compression?
Tensor decompositions have been successfully applied to compress neural networks. The compression algorithms using tensor decompositions commonly minimize the approximation error on the weights. Recent work assumes the approximation error on the weights …
External link:
http://arxiv.org/abs/2305.05318
For the first time, this position paper introduces a fundamental link between tensor networks (TNs) and Green AI, highlighting their synergistic potential to enhance both the inclusivity and sustainability of AI research. We argue that TNs are valuable …
External link:
http://arxiv.org/abs/2205.12961
Least squares support vector machines are a commonly used supervised learning method for nonlinear regression and classification. They can be implemented in either their primal or dual form. The latter requires solving a linear system, which can be …
External link:
http://arxiv.org/abs/2110.13501