Showing 1 - 10 of 78 results for the search: '"Kadri, Hachem"'
$C^*$-algebra-valued kernels could pave the way for the next generation of kernel machines. To further our fundamental understanding of learning with $C^*$-algebraic kernels, we propose a new class of positive definite kernels based on the spectral t…
External link:
http://arxiv.org/abs/2405.17823
Published in:
ICML 2024
Machine learning has a long collaborative tradition with several fields of mathematics, such as statistics, probability and linear algebra. We propose a new direction for machine learning research: $C^*$-algebraic ML $-$ a cross-fertilization between…
External link:
http://arxiv.org/abs/2402.02637
Author:
Demni, Nizar, Kadri, Hachem
Random features have been introduced to scale up kernel methods via randomization techniques. In particular, random Fourier features and orthogonal random features were used to approximate the popular Gaussian kernel. Random Fourier features are buil…
External link:
http://arxiv.org/abs/2310.07370
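The random Fourier features construction this abstract refers to can be sketched as follows. This is a minimal illustration of the standard Rahimi-Recht approximation of the Gaussian kernel, not the paper's specific method; all function and variable names are illustrative:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Exact Gaussian (RBF) kernel value between two vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def random_fourier_features(X, num_features, sigma=1.0, seed=None):
    """Map rows of X to a random feature space whose inner products
    approximate the Gaussian kernel in expectation."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density N(0, I / sigma^2)
    W = rng.normal(scale=1.0 / sigma, size=(d, num_features))
    # Random phase offsets, uniform on [0, 2*pi)
    b = rng.uniform(0, 2 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 5))
Z = random_fourier_features(X, num_features=5000, sigma=1.0, seed=1)
approx = Z[0] @ Z[1]                      # randomized estimate of k(x0, x1)
exact = gaussian_kernel(X[0], X[1])       # exact kernel value
# approx converges to exact as num_features grows (Monte Carlo rate)
```

The approximation error decays at the Monte Carlo rate, roughly 1/sqrt(num_features), which is what makes this a practical way to scale kernel methods to large datasets.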
The quantum separability problem consists in deciding whether a bipartite density matrix is entangled or separable. In this work, we propose a machine learning pipeline for finding approximate solutions for this NP-hard problem in large-scale scenari…
External link:
http://arxiv.org/abs/2306.09444
Published in:
NeurIPS 2023
Reproducing kernel Hilbert $C^*$-module (RKHM) is a generalization of reproducing kernel Hilbert space (RKHS) by means of $C^*$-algebra, and the Perron-Frobenius operator is a linear operator related to the composition of functions. Combining these t…
External link:
http://arxiv.org/abs/2305.13588
Supervised learning in reproducing kernel Hilbert space (RKHS) and vector-valued RKHS (vvRKHS) has been investigated for more than 30 years. In this paper, we provide a new twist to this rich literature by generalizing supervised learning in RKHS and…
External link:
http://arxiv.org/abs/2210.11855
Published in:
Proceedings of the 39th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022
We study the implicit regularization effects of deep learning in tensor factorization. While implicit regularization in deep matrix and 'shallow' tensor factorization via linear and certain types of non-linear neural networks promotes low-rank solutio…
External link:
http://arxiv.org/abs/2207.08942
We contribute to filling the long-standing gap in the understanding of spatial search with multiple marked vertices. The theoretical framework is that of discrete-time quantum walks (QW), i.e. local unitary matrices that drive the evolutio…
External link:
http://arxiv.org/abs/2201.12937
Traditionally, kernel methods rely on the representer theorem, which states that the solution to a learning problem is obtained as a linear combination of the data mapped into the reproducing kernel Hilbert space (RKHS). While elegant from a theoretical…
External link:
http://arxiv.org/abs/2108.12199
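The representer theorem mentioned in this abstract underlies, for example, kernel ridge regression, where the learned function is exactly a linear combination of kernel evaluations at the training points. A minimal sketch under a Gaussian-kernel assumption (all names are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel_matrix(A, B, sigma=1.0):
    """Pairwise Gaussian kernel values between rows of A and rows of B."""
    sq = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma ** 2))

def fit_kernel_ridge(X, y, lam=1e-2, sigma=1.0):
    """Solve (K + lam * I) alpha = y. By the representer theorem the
    regularized-risk minimizer lies in span{k(x_i, .)}, so the
    coefficients alpha fully describe the solution."""
    K = rbf_kernel_matrix(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, sigma=1.0):
    """f(x) = sum_i alpha_i k(x_i, x): a linear combination of the
    training data mapped into the RKHS."""
    return rbf_kernel_matrix(X_new, X_train, sigma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=40)
alpha = fit_kernel_ridge(X, y)
pred = predict(X, alpha, X)   # fitted values on the training inputs
```

The point of this work, as the snippet indicates, is precisely to move beyond this representer-theorem solution form, so the sketch above illustrates the classical baseline being generalized rather than the paper's contribution.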
Quantum machine learning algorithms could provide significant speed-ups over their classical counterparts; however, whether they could also achieve good generalization remains unclear. Recently, two quantum perceptron models which give a quadratic im…
External link:
http://arxiv.org/abs/2106.02496