Showing 1 - 10 of 27 for the search: '"Viering, Tom"'
To reach high performance with deep learning, hyperparameter optimization (HPO) is essential. This process is usually time-consuming due to costly evaluations of neural networks. Early discarding techniques limit the resources granted to unpromising…
External link:
http://arxiv.org/abs/2404.04111
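The entry above describes early discarding for HPO: unpromising configurations get only a small share of the training budget. As a rough illustration of that general idea (not necessarily the method studied in the paper), the sketch below implements plain successive halving; the `evaluate` function is a hypothetical stand-in for training a network for a given budget and returning a validation score.

```python
# Minimal successive-halving sketch of the early-discarding idea:
# all configurations get a small budget, the weaker half is discarded,
# and the survivors receive a larger budget. Illustration only.
import random

def evaluate(config, budget):
    # Hypothetical placeholder for "train with this budget, return validation score".
    return config["quality"] * (1 - 0.5 ** budget)

def successive_halving(configs, min_budget=1, eta=2, rounds=3):
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        scored = [(evaluate(c, budget), c) for c in survivors]
        scored.sort(key=lambda sc: sc[0], reverse=True)
        # Keep only the most promising fraction before spending more resources.
        survivors = [c for _, c in scored[: max(1, len(scored) // eta)]]
        budget *= eta
    return survivors[0]

random.seed(0)
candidates = [{"quality": random.random()} for _ in range(16)]
print("most promising configuration:", successive_halving(candidates))
```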
Author:
Loog, Marco, Viering, Tom
Plotting a learner's generalization performance against the training set size results in a so-called learning curve. This tool, providing insight into the behavior of the learner, is also practically valuable for model selection, predicting the effect…
External link:
http://arxiv.org/abs/2211.14061
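As a concrete companion to the learning-curve entry above, the short sketch below plots average cross-validated performance against training set size. It assumes scikit-learn and matplotlib are available; the dataset and classifier are arbitrary choices for illustration, not anything from the paper.

```python
# Plot a learning curve: score as a function of training set size.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)
sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8), cv=5)

# Average over the cross-validation folds for each training set size.
plt.plot(sizes, test_scores.mean(axis=1), marker="o", label="cross-validated score")
plt.xlabel("training set size")
plt.ylabel("score")
plt.legend()
plt.show()
```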
Author:
Viering, Tom, Loog, Marco
Learning curves provide insight into the dependence of a learner's generalization performance on the training set size. This important tool can be used for model selection, to predict the effect of more training data, and to reduce the computational…
External link:
http://arxiv.org/abs/2103.10948
In their thought-provoking paper [1], Belkin et al. illustrate and discuss the shape of risk curves in the context of modern high-complexity learners. Given a fixed training sample size $n$, such curves show the risk of a learner as a function of some…
External link:
http://arxiv.org/abs/2004.04328
Learning performance can show non-monotonic behavior. That is, more data does not necessarily lead to better models, even on average. We propose three algorithms that take a supervised learning model and make it perform more monotone. We prove consistency…
External link:
http://arxiv.org/abs/1911.11030
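To make the non-monotonicity entry above more tangible, the sketch below shows one simple idea in that spirit: as more training data arrives, only switch to the newly fitted model if it is at least as good on a held-out validation set. This is a simplified illustration, not one of the three algorithms proposed in the paper; the dataset and classifier are arbitrary.

```python
# Keep the previous model unless the retrained one performs at least as well
# on a fixed validation set, so reported performance never decreases.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

best_model, best_score = None, -1.0
for n in range(50, len(X_train) + 1, 50):
    candidate = DecisionTreeClassifier(random_state=0).fit(X_train[:n], y_train[:n])
    score = candidate.score(X_val, y_val)
    if score >= best_score:  # discard the new model if it is worse
        best_model, best_score = candidate, score
    print(f"n={n:4d}  candidate={score:.3f}  kept={best_score:.3f}")
```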
Recently, many methods have been introduced to explain CNN decisions. However, it has been shown that some methods can be sensitive to manipulation of the input. We continue this line of work and investigate the explanation method GradCAM. Instead of…
External link:
http://arxiv.org/abs/1907.10901
Plotting a learner's average performance against the number of training samples results in a learning curve. Studying such curves on one or more data sets is a way to get a better understanding of the generalization properties of this learner. The…
External link:
http://arxiv.org/abs/1907.05476
Manifold regularization is a commonly used technique in semi-supervised learning. It enforces the classification rule to be smooth with respect to the data manifold. Here, we derive sample complexity bounds based on pseudo-dimension for models that…
External link:
http://arxiv.org/abs/1906.06100
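For context on the manifold-regularization entry above, the standard Laplacian-regularized objective (in the style of Belkin, Niyogi and Sindhwani) is sketched below; the exact model class covered by the bounds in the paper may differ.

$$
f^{*} \;=\; \arg\min_{f \in \mathcal{H}} \; \frac{1}{\ell}\sum_{i=1}^{\ell} V\!\left(x_i, y_i, f(x_i)\right) \;+\; \gamma_A \,\lVert f \rVert_{\mathcal{H}}^{2} \;+\; \gamma_I \, \mathbf{f}^{\top} L \, \mathbf{f},
$$

where $\ell$ is the number of labeled examples, $V$ a loss function, $\lVert f \rVert_{\mathcal{H}}$ the ambient (RKHS) norm, $L$ the graph Laplacian built from both labeled and unlabeled points, and $\mathbf{f}$ the vector of predictions on those points; the $\gamma_I$ term is what enforces smoothness along the data manifold.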
Active learning algorithms propose which unlabeled objects should be queried for their labels to improve a predictive model the most. We study active learners that minimize generalization bounds and uncover relationships between these bounds that lead…
External link:
http://arxiv.org/abs/1706.02645