Showing 1 - 10 of 5,241
for the search: '"Feofanov A"'
Published in:
NeurIPS 2024 Workshop on Time Series in the Age of Large Models
Recently, there has been a growing interest in time series foundation models that generalize across different downstream tasks. A key to strong foundation models is a diverse pre-training dataset, which is particularly challenging to collect for time…
External link:
http://arxiv.org/abs/2412.06368
Foundation models, while highly effective, are often resource-intensive, requiring substantial inference time and memory. This paper addresses the challenge of making these models more accessible with limited computational resources by exploring dime…
External link:
http://arxiv.org/abs/2409.12264
Analysing Multi-Task Regression via Random Matrix Theory with Application to Time Series Forecasting
Author:
Ilbert, Romain, Tiomoko, Malik, Louart, Cosme, Odonnat, Ambroise, Feofanov, Vasilii, Palpanas, Themis, Redko, Ievgen
In this paper, we introduce a novel theoretical framework for multi-task regression, applying random matrix theory to provide precise performance estimations under high-dimensional, non-Gaussian data distributions. We formulate a multi-task optimiza…
External link:
http://arxiv.org/abs/2406.10327
Leveraging the models' outputs, specifically the logits, is a common approach to estimating the test accuracy of a pre-trained neural network on out-of-distribution (OOD) samples without requiring access to the corresponding ground truth labels. Desp…
External link:
http://arxiv.org/abs/2405.18979
Author:
Ilbert, Romain, Odonnat, Ambroise, Feofanov, Vasilii, Virmaux, Aladin, Paolo, Giuseppe, Palpanas, Themis, Redko, Ievgen
Transformer-based architectures have achieved breakthrough performance in natural language processing and computer vision, yet they remain inferior to simpler linear baselines in multivariate long-term forecasting. To better understand this phenomenon, we…
External link:
http://arxiv.org/abs/2402.10198
Estimating test accuracy without access to the ground-truth test labels under varying test environments is a challenging yet extremely important problem in the safe deployment of machine learning algorithms. Existing works rely on the information fr…
External link:
http://arxiv.org/abs/2401.08909
Self-training is a well-known approach for semi-supervised learning. It consists of iteratively assigning pseudo-labels to unlabeled data for which the model is confident and treating them as labeled examples. For neural networks, softmax prediction…
External link:
http://arxiv.org/abs/2310.14814
Published in:
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:10008-10033, 2023
We propose a theoretical framework to analyze semi-supervised classification under the low density separation assumption in a high-dimensional regime. In particular, we introduce QLDS, a linear classification model, where the low density separation a…
External link:
http://arxiv.org/abs/2310.13434
Author:
Sheinberg, Esti
Published in:
Notes, 1999 Dec 01. 56(2), 422-424.
External link:
https://www.jstor.org/stable/900029
Author:
Fanning, David
Published in:
Music & Letters, 1999 Aug 01. 80(3), 489-491.
External link:
https://www.jstor.org/stable/855056