Showing 1 - 10 of 17 for search: '"Feofanov, Vasilii"'
Foundation models, while highly effective, are often resource-intensive, requiring substantial inference time and memory. This paper addresses the challenge of making these models more accessible under limited computational resources by exploring dime…
External link:
http://arxiv.org/abs/2409.12264
Analysing Multi-Task Regression via Random Matrix Theory with Application to Time Series Forecasting
Author:
Ilbert, Romain, Tiomoko, Malik, Louart, Cosme, Odonnat, Ambroise, Feofanov, Vasilii, Palpanas, Themis, Redko, Ievgen
In this paper, we introduce a novel theoretical framework for multi-task regression, applying random matrix theory to provide precise performance estimates under high-dimensional, non-Gaussian data distributions. We formulate a multi-task optimiza…
External link:
http://arxiv.org/abs/2406.10327
Leveraging the models' outputs, specifically the logits, is a common approach to estimating the test accuracy of a pre-trained neural network on out-of-distribution (OOD) samples without requiring access to the corresponding ground-truth labels. Desp…
External link:
http://arxiv.org/abs/2405.18979
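The logit-based accuracy estimators in this line of work build on simple confidence statistics. A minimal sketch of the generic average-confidence baseline such methods refine (function names are illustrative, not the paper's actual estimator):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with max subtraction for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def average_confidence(logits):
    """Label-free accuracy proxy: mean max softmax probability.

    For a well-calibrated model, the average confidence of the predicted
    class tracks the true accuracy; distribution shift typically breaks
    this, which is what more refined OOD estimators try to correct.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    return float(probs.max(axis=1).mean())

# Peaked logits yield an estimate near 1; near-uniform logits near 1/k.
peaked_estimate = average_confidence([[10.0, 0.0], [0.0, 10.0]])
flat_estimate = average_confidence([[0.1, 0.0], [0.0, 0.1]])
```

The max-subtraction trick keeps `np.exp` from overflowing on large logits without changing the softmax output.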
Author:
Ilbert, Romain, Odonnat, Ambroise, Feofanov, Vasilii, Virmaux, Aladin, Paolo, Giuseppe, Palpanas, Themis, Redko, Ievgen
Transformer-based architectures have achieved breakthrough performance in natural language processing and computer vision, yet they remain inferior to simpler linear baselines in multivariate long-term forecasting. To better understand this phenomenon, we…
External link:
http://arxiv.org/abs/2402.10198
Estimating test accuracy without access to the ground-truth test labels under varying test environments is a challenging, yet extremely important problem in the safe deployment of machine learning algorithms. Existing works rely on the information fr…
External link:
http://arxiv.org/abs/2401.08909
Self-training is a well-known approach for semi-supervised learning. It consists of iteratively assigning pseudo-labels to unlabeled data for which the model is confident and treating them as labeled examples. For neural networks, softmax prediction…
External link:
http://arxiv.org/abs/2310.14814
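The iterative pseudo-labeling loop described in this abstract can be sketched as follows; the nearest-centroid base learner and the `threshold` default are illustrative choices for a self-contained example, not the paper's setup:

```python
import numpy as np

class NearestCentroid:
    """Minimal nearest-centroid base learner (illustrative choice)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict_proba(self, X):
        # Softmax over negative distances as a crude confidence score.
        s = -np.linalg.norm(
            X[:, None, :] - self.centroids_[None, :, :], axis=2)
        e = np.exp(s - s.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.classes_[self.predict_proba(X).argmax(axis=1)]

def self_train(X_lab, y_lab, X_unl, threshold=0.9, max_rounds=10):
    """Self-training: pseudo-label unlabeled points the model is confident
    on (max predicted probability >= threshold), refit on the enlarged set."""
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unl.copy()
    clf = NearestCentroid().fit(X, y)
    for _ in range(max_rounds):
        if len(pool) == 0:
            break
        probs = clf.predict_proba(pool)
        confident = probs.max(axis=1) >= threshold
        if not confident.any():
            break  # nothing above the confidence threshold; stop early
        X = np.vstack([X, pool[confident]])
        y = np.concatenate(
            [y, clf.classes_[probs[confident].argmax(axis=1)]])
        pool = pool[~confident]
        clf = NearestCentroid().fit(X, y)
    return clf
```

The fixed threshold here is exactly the design choice the papers above study: set it too low and wrong pseudo-labels pollute the training set, too high and no unlabeled data is ever used.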
Published in:
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:10008-10033, 2023
We propose a theoretical framework to analyze semi-supervised classification under the low density separation assumption in a high-dimensional regime. In particular, we introduce QLDS, a linear classification model, where the low density separation a…
External link:
http://arxiv.org/abs/2310.13434
Author:
Amini, Massih-Reza, Feofanov, Vasilii, Pauletto, Loic, Hadjadj, Lies, Devijver, Emilie, Maximov, Yury
Semi-supervised algorithms aim to learn prediction functions from a small set of labeled observations and a large set of unlabeled observations. Because this framework is relevant in many applications, it has received a lot of interest in both aca…
External link:
http://arxiv.org/abs/2202.12040
Self-learning is a classical approach for learning from both labeled and unlabeled observations; it consists of assigning pseudo-labels to unlabeled training instances whose confidence score exceeds a predetermined threshold. At the same time, the pseud…
External link:
http://arxiv.org/abs/2109.14422
In this paper, we propose a new wrapper feature selection approach with partially labeled training examples, where unlabeled observations are pseudo-labeled using the predictions of an initial classifier trained on the labeled training set. The wrappe…
External link:
http://arxiv.org/abs/1911.04841
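A minimal sketch of the pseudo-label-then-wrap idea from this last abstract, assuming a nearest-centroid initial classifier and greedy forward selection scored by resubstitution accuracy (all names and choices illustrative, not the paper's algorithm):

```python
import numpy as np

def centroid_predict(X_tr, y_tr, X_te):
    """Nearest-centroid prediction on the given feature matrix."""
    classes = np.unique(y_tr)
    cent = np.array([X_tr[y_tr == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_te[:, None, :] - cent[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def wrapper_select(X_lab, y_lab, X_unl, n_features):
    """Forward wrapper selection on labeled plus pseudo-labeled data.

    Unlabeled points are pseudo-labeled once by a classifier trained on
    the labeled set with all features; candidate feature subsets are then
    scored by accuracy on the pooled (labeled + pseudo-labeled) data.
    """
    y_pseudo = centroid_predict(X_lab, y_lab, X_unl)
    X = np.vstack([X_lab, X_unl])
    y = np.concatenate([y_lab, y_pseudo])
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_features:
        def score(j):
            cols = selected + [j]
            pred = centroid_predict(X[:, cols], y, X[:, cols])
            return (pred == y).mean()
        best = max(remaining, key=score)  # greedy: best single addition
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because the subset is scored with the classifier itself rather than a filter statistic, this is a wrapper method; the pseudo-labels let the wrapper exploit the unlabeled pool it could not otherwise score against.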