Showing 1 - 10 of 216 for search: '"Mueller, Samuel A."'
While most ML models expect independent and identically distributed data, this assumption is often violated in real-world scenarios due to distribution shifts, resulting in the degradation of machine learning model performance. Until now, no tabular…
External link:
http://arxiv.org/abs/2411.10634
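The degradation this abstract describes is easy to reproduce by comparing a random split against a chronological one. A minimal sketch of that comparison, assuming a hypothetical CSV with numeric features, a `year` column, and a `label` column (an illustration of the problem, not the paper's method):

```python
# Minimal sketch: random vs. chronological evaluation splits to expose
# performance degradation under temporal distribution shift.
# The CSV file and its "year"/"label" columns are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("tabular_data.csv").sort_values("year")
X, y = df.drop(columns=["label"]), df["label"]

# Random split: train and test are approximately identically distributed.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
iid_acc = accuracy_score(
    y_te, GradientBoostingClassifier().fit(X_tr, y_tr).predict(X_te)
)

# Chronological split: the test set comes from a later time period, so
# any drift in the data-generating process hurts the model.
cut = int(0.8 * len(df))
shift_acc = accuracy_score(
    y.iloc[cut:],
    GradientBoostingClassifier().fit(X.iloc[:cut], y.iloc[:cut]).predict(X.iloc[cut:]),
)
print(f"random split: {iid_acc:.3f}  chronological split: {shift_acc:.3f}")
```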
Traditionally, neural network training has been primarily viewed as an approximation of maximum likelihood estimation (MLE). This interpretation originated in a time when training for multiple epochs on small datasets was common and performance was…
External link:
http://arxiv.org/abs/2410.01565
Learning curve extrapolation aims to predict model performance in later epochs of training, based on the performance in earlier epochs. In this work, we argue that, while the inherent uncertainty in the extrapolation of learning curves warrants a Bayesian…
External link:
http://arxiv.org/abs/2310.20447
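For context, the classical point-estimate baseline in this area is a parametric curve fit: extrapolate later-epoch performance by fitting, say, a power law to the early epochs. A minimal sketch of that baseline (the functional form and the synthetic curve are illustrative; this is not the paper's Bayesian, PFN-based approach):

```python
# Minimal sketch of learning curve extrapolation via a power-law fit.
# This is the classical point-estimate baseline, not the paper's
# Bayesian / PFN-based method; the example curve is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def power_law(epoch, a, b, c):
    # Validation error decaying toward an asymptote c.
    return a * epoch ** (-b) + c

epochs = np.arange(1, 11)                    # first 10 observed epochs
errors = 0.5 * epochs ** -0.7 + 0.1          # synthetic learning curve
errors += np.random.default_rng(0).normal(0, 0.005, size=epochs.shape)

params, _ = curve_fit(power_law, epochs, errors, p0=(0.5, 0.5, 0.1))
print("predicted error at epoch 100:", power_law(100, *params))
```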
We consider the problem of surrogate sufficient dimension reduction, that is, estimating the central subspace of a regression model, when the covariates are contaminated by measurement error. When no measurement error is present, a likelihood-based…
External link:
http://arxiv.org/abs/2310.13858
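In the error-free setting the abstract refers to, the best-known estimator of the central subspace is sliced inverse regression (SIR). A minimal sketch of plain SIR (not the paper's measurement-error-corrected method; the test data are synthetic):

```python
# Minimal sketch of sliced inverse regression (SIR), a classical
# estimator of the central subspace for error-free covariates.
# Not the paper's measurement-error-corrected method.
import numpy as np

def sir(X, y, n_slices=10, n_directions=1):
    n, p = X.shape
    # Whiten X: with cov = L @ L.T, Z = (X - mu) @ inv(L).T has identity covariance.
    mu = X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X, rowvar=False))
    W = np.linalg.inv(L).T
    Z = (X - mu) @ W
    # Slice the sorted response and average Z within each slice.
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Top eigenvectors of M span the central subspace in whitened coordinates.
    _, eigvecs = np.linalg.eigh(M)
    directions = eigvecs[:, ::-1][:, :n_directions]
    return W @ directions  # map back to the original coordinates

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
index = X @ np.array([1.0, 2.0, 0.0, 0.0, 0.0])
y = index + 0.25 * index**3 + 0.1 * rng.normal(size=500)
print(sir(X, y).ravel())  # aligns with (1, 2, 0, 0, 0) up to sign and scale
```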
In this paper, we use Prior-data Fitted Networks (PFNs) as a flexible surrogate for Bayesian Optimization (BO). PFNs are neural processes that are trained to approximate the posterior predictive distribution (PPD) through in-context learning on any prior…
External link:
http://arxiv.org/abs/2305.17535
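The BO loop itself is standard; what the paper changes is the surrogate. A minimal sketch of the outer loop with expected improvement, where `surrogate` is a hypothetical stand-in for a PFN that returns a Gaussian approximation of the posterior predictive:

```python
# Minimal sketch of a Bayesian optimization loop with expected
# improvement. `surrogate(X, y, cand)` is a hypothetical stand-in for a
# PFN: it returns predictive means and standard deviations for `cand`.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y):
    # EI for minimization under a Gaussian predictive distribution.
    z = (best_y - mu) / np.maximum(sigma, 1e-9)
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bo_loop(objective, surrogate, bounds, n_init=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))          # initial design
    y = np.array([objective(x) for x in X])
    for _ in range(n_iter):
        cand = rng.uniform(lo, hi, size=(256, 1))      # random candidates
        mu, sigma = surrogate(X, y, cand)              # in-context prediction
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    return X[np.argmin(y)], y.min()
```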
As the field of automated machine learning (AutoML) advances, it becomes increasingly important to incorporate domain knowledge into these systems. We present an approach for doing so by harnessing the power of large language models (LLMs). Specifically…
External link:
http://arxiv.org/abs/2305.03403
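One plausible instantiation of the idea, sketched below, is a propose-and-filter loop: ask a language model for a candidate feature, execute the code, and keep the feature only if cross-validated performance improves. The `ask_llm` helper is hypothetical, the data are assumed numeric, and this is not necessarily the paper's exact procedure:

```python
# Minimal sketch of LLM-assisted feature engineering: propose a feature,
# keep it only if cross-validation improves. `ask_llm` is a hypothetical
# helper; any LLM client could fill it in. Assumes numeric columns.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def propose_and_filter(df, target, ask_llm, n_rounds=5):
    def cv_score(frame):
        return cross_val_score(
            LogisticRegression(max_iter=1000),
            frame.drop(columns=[target]), frame[target], cv=5,
        ).mean()

    baseline = cv_score(df)
    for _ in range(n_rounds):
        # The LLM sees the column names and returns one line of pandas
        # code, e.g. "df['bmi'] = df.weight / df.height ** 2".
        code = ask_llm(
            f"Columns: {list(df.columns)}. "
            f"Write one pandas line adding a useful feature to df."
        )
        candidate = df.copy()
        try:
            exec(code, {}, {"df": candidate})   # untrusted code: sandbox in practice!
        except Exception:
            continue
        score = cv_score(candidate)
        if score > baseline:                    # keep only improvements
            df, baseline = candidate, score
    return df
```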
Author:
Wagner, Diane, Ferreira, Fabio, Stoll, Danny, Schirrmeister, Robin Tibor, Müller, Samuel, Hutter, Frank
Self-Supervised Learning (SSL) has become a very active area of Deep Learning research where it is heavily used as a pre-training method for classification and other tasks. However, the rapid pace of advancements in this area comes at a price: training…
External link:
http://arxiv.org/abs/2207.07875
We present TabPFN, a trained Transformer that can do supervised classification for small tabular datasets in less than a second, needs no hyperparameter tuning and is competitive with state-of-the-art classification methods. TabPFN performs in-context…
External link:
http://arxiv.org/abs/2207.01848
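TabPFN is distributed as a Python package; assuming the `tabpfn` PyPI distribution and its scikit-learn-compatible interface, usage looks roughly like this:

```python
# Rough usage sketch, assuming the `tabpfn` PyPI package and its
# scikit-learn-compatible interface (pip install tabpfn).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# No hyperparameter tuning: a single forward pass performs in-context
# classification conditioned on the whole training set.
clf = TabPFNClassifier()
clf.fit(X_tr, y_tr)          # stores the context; no gradient training
print(clf.score(X_te, y_te))
```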
Currently, it is hard to reap the benefits of deep learning for Bayesian methods, which allow the explicit specification of prior knowledge and accurately capture model uncertainty. We present Prior-Data Fitted Networks (PFNs). PFNs leverage in-context…
External link:
http://arxiv.org/abs/2112.10510
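The training recipe behind PFNs is compact: repeatedly sample a synthetic dataset from the prior, split it into context and targets, and train a network to predict the held-out labels. A conceptual PyTorch-flavored sketch, where `prior_sample` and `model` are stand-ins rather than the released implementation:

```python
# Conceptual sketch of prior-data fitting: train a network to predict
# held-out labels of datasets sampled from a prior. `prior_sample` and
# `model` are stand-ins, not the released PFN implementation.
import torch

def prior_sample(n_points, n_features):
    # Toy prior: random linear functions with Gaussian noise.
    w = torch.randn(n_features, 1)
    x = torch.randn(n_points, n_features)
    y = x @ w + 0.1 * torch.randn(n_points, 1)
    return x, y

def fit_pfn(model, steps=10_000, n_points=64, n_features=5, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        x, y = prior_sample(n_points, n_features)
        cut = torch.randint(1, n_points, ()).item()   # random context size
        # The model sees (x_context, y_context, x_target) and outputs a
        # predictive distribution for y_target in one forward pass;
        # here it is assumed to emit mean/log-std pairs per target.
        pred = model(x[:cut], y[:cut], x[cut:])
        mean, log_std = pred.chunk(2, dim=-1)
        nll = torch.nn.functional.gaussian_nll_loss(
            mean, y[cut:], log_std.exp() ** 2         # var = std ** 2
        )
        opt.zero_grad(); nll.backward(); opt.step()   # minimize NLL, i.e. fit the PPD
    return model
```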
Author:
Lukács, Réka, Guillong, Marcel, Szepesi, János, Szymanowski, Dawid, Portnyagin, Maxim, Józsa, Sándor, Bachmann, Olivier, Petrelli, Maurizio, Müller, Samuel, Schiller, David, Fodor, László, Chelle-Michou, Cyril, Harangi, Szabolcs
Published in:
In Gondwana Research June 2024 130:53-77