Showing 1 - 10 of 367 for the search: '"Couillet, Romain"'
Author:
Leger, Victor, Couillet, Romain
This article considers a semi-supervised classification setting on a Gaussian mixture model, where the data is not labeled strictly as usual, but instead with uncertain labels. Our main aim is to compute the Bayes risk for this model. We compare the …
External link:
http://arxiv.org/abs/2403.17767
Author:
Leger, Victor, Couillet, Romain
This article conducts a large dimensional study of a simple yet quite versatile classification model, encompassing at once multi-task and semi-supervised learning, and taking into account uncertain labeling. Using tools from random matrix theory, we …
External link:
http://arxiv.org/abs/2402.13646
The performance of spectral clustering relies on the fluctuations of the entries of the eigenvectors of a similarity matrix, which has been left uncharacterized until now. In this letter, it is shown that the signal $+$ noise structure of a general s…
External link:
http://arxiv.org/abs/2402.12302
This work presents a comprehensive understanding of the estimation of a planted low-rank signal from a general spiked tensor model near the computational threshold. Relying on standard tools from the theory of large random matrices, we characterize t…
External link:
http://arxiv.org/abs/2402.03169
Author:
Nguyen, Minh-Toan, Couillet, Romain
The article considers semi-supervised multitask learning on a Gaussian mixture model (GMM). Using methods from statistical physics, we compute the asymptotic Bayes risk of each task in the regime of large datasets in high dimension, from which we ana…
External link:
http://arxiv.org/abs/2303.02048
Relying on random matrix theory (RMT), this paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise. Using the variational definition of the singular vectors and values of (Lim, 2005), we show that the analysis of the considered m…
External link:
http://arxiv.org/abs/2112.12348
The article proposes and theoretically analyses a computationally efficient multi-task learning (MTL) extension of popular principal component analysis (PCA)-based supervised learning schemes (Barshan et al., 2011; Bair et al., 2006). Th…
External link:
http://arxiv.org/abs/2111.00924
This article proposes a distributed multi-task learning (MTL) algorithm based on supervised principal component analysis (SPCA) which is: (i) theoretically optimal for Gaussian mixtures, and (ii) computationally cheap and scalable. Supporting experiments …
External link:
http://arxiv.org/abs/2110.04639