Showing 1 - 10 of 53 results for search: '"Bengs, Viktor"'
Published in:
Proceedings of the 41st International Conference on Machine Learning (ICML), 2024, pp. 22624–22642
Trustworthy ML systems should not only return accurate predictions, but also a reliable representation of their uncertainty. Bayesian methods are commonly used to quantify both aleatoric and epistemic uncertainty, but alternative approaches, such as …
External link:
http://arxiv.org/abs/2402.09056
Reinforcement learning from human feedback (RLHF) is a variant of reinforcement learning (RL) that learns from human feedback instead of relying on an engineered reward function. Building on prior work on the related setting of preference-based reinforcement learning …
External link:
http://arxiv.org/abs/2312.14925
In the past couple of years, various approaches to representing and quantifying different types of predictive uncertainty in machine learning, notably in the setting of classification, have been proposed on the basis of second-order probability distributions …
External link:
http://arxiv.org/abs/2312.00995
We consider the task of identifying the Copeland winner(s) in a dueling bandits problem with ternary feedback. This is an underexplored but practically relevant variant of the conventional dueling bandits problem, in which, in addition to strict preferences …
External link:
http://arxiv.org/abs/2310.00750
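The entry above concerns identifying Copeland winners in dueling bandits. As a minimal illustration of the underlying concept (not of the paper's algorithm), the sketch below computes the Copeland winner(s) directly from a fully known pairwise preference matrix; the matrix values are made up for the example:

```python
import numpy as np

# Hypothetical pairwise preference matrix for 4 arms: P[i, j] is the
# probability that arm i beats arm j in a duel (P[i, j] + P[j, i] = 1).
P = np.array([
    [0.5, 0.7, 0.6, 0.8],
    [0.3, 0.5, 0.6, 0.7],
    [0.4, 0.4, 0.5, 0.6],
    [0.2, 0.3, 0.4, 0.5],
])

def copeland_winners(P):
    """Return the indices of arms that beat the largest number of other arms."""
    wins = (P > 0.5).sum(axis=1)           # number of arms each arm strictly beats
    return np.flatnonzero(wins == wins.max())

print(copeland_winners(P))  # arm 0 beats all three others, so it is the winner
```

In the bandit setting, P is unknown and must be estimated from noisy duels; the paper's contribution lies in doing this sample-efficiently, which the toy computation above does not address.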
Hyperparameter optimization (HPO) is concerned with the automated search for the most appropriate hyperparameter configuration (HPC) of a parameterized machine learning algorithm. A state-of-the-art HPO method is Hyperband, which, however, has its own …
External link:
http://arxiv.org/abs/2302.00511
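Hyperband, mentioned in the entry above, is built around successive halving: evaluate many configurations cheaply, then repeatedly discard the worst and re-evaluate the survivors with a larger budget. A self-contained sketch of that building block on a toy objective (all names and the objective are illustrative, not from the paper):

```python
import random

def successive_halving(configs, evaluate, budget=1, eta=3):
    """Keep the best 1/eta fraction of configurations each round,
    multiplying the per-configuration budget by eta."""
    while len(configs) > 1:
        scores = [(evaluate(c, budget), c) for c in configs]
        scores.sort(key=lambda t: t[0])              # lower loss is better
        configs = [c for _, c in scores[:max(1, len(configs) // eta)]]
        budget *= eta
    return configs[0]

random.seed(0)

# Toy objective: squared distance of a "learning rate" from 0.1,
# observed with noise that shrinks as the evaluation budget grows.
def evaluate(lr, budget):
    return (lr - 0.1) ** 2 + random.gauss(0, 0.01 / budget)

candidates = [10 ** random.uniform(-4, 0) for _ in range(27)]
best = successive_halving(candidates, evaluate)
```

Full Hyperband runs several such brackets with different trade-offs between the number of configurations and the starting budget, hedging against the risk that cheap evaluations are misleading.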
The Shapley value, which is arguably the most popular approach for assigning a meaningful contribution value to players in a cooperative game, has recently been used intensively in explainable artificial intelligence. Its meaningfulness is due to axiomatic properties …
External link:
http://arxiv.org/abs/2302.00736
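For readers unfamiliar with the Shapley value referenced above: it averages a player's marginal contribution over all orderings of the players. A minimal exact computation on a made-up two-player game (exponential in the number of players, hence the approximation schemes studied in this line of work):

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    to the coalition of players preceding it, over all orderings."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    n_orders = factorial(len(players))
    return {p: total / n_orders for p, total in phi.items()}

# Hypothetical two-player game with synergy: together worth more than apart.
v = {frozenset(): 0, frozenset('a'): 1, frozenset('b'): 2, frozenset('ab'): 4}
phi = shapley_values(['a', 'b'], lambda s: v[s])
# phi['a'] = 1.5, phi['b'] = 2.5; they sum to v({a, b}) (the efficiency axiom)
```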
It is well known that accurate probabilistic predictors can be trained through empirical risk minimisation with proper scoring rules as loss functions. While such learners capture so-called aleatoric uncertainty of predictions, various machine learning …
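The defining property of a strictly proper scoring rule, as invoked in the entry above, is that its expected loss is uniquely minimised by reporting the true probability. A small numerical check of this for the log loss on a Bernoulli outcome (the grid search is just for illustration):

```python
import math

def expected_log_loss(q, p):
    """Expected log loss of predicting probability q when the true
    Bernoulli parameter is p (log loss is strictly proper)."""
    return -(p * math.log(q) + (1 - p) * math.log(1 - q))

p_true = 0.7
candidates = [i / 100 for i in range(1, 100)]
best_q = min(candidates, key=lambda q: expected_log_loss(q, p_true))
# best_q equals 0.7: honest reporting of the true probability is optimal
```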
Author:
Brandt, Jasmin, Schede, Elias, Bengs, Viktor, Haddenhorst, Björn, Hüllermeier, Eyke, Tierney, Kevin
We study the algorithm configuration (AC) problem, in which one seeks to find an optimal parameter configuration of a given target algorithm in an automated way. Recently, there has been significant progress in designing AC approaches that satisfy strong theoretical guarantees …
External link:
http://arxiv.org/abs/2212.00333
Multi-class classification methods that produce sets of probabilistic classifiers, such as ensemble learning methods, are able to model aleatoric and epistemic uncertainty. Aleatoric uncertainty is then typically quantified via the Bayes error, and epistemic uncertainty …
External link:
http://arxiv.org/abs/2205.10082
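A common way to separate the two kinds of uncertainty from an ensemble, related to the setting of the entry above, is the entropy decomposition: total uncertainty is the entropy of the averaged prediction, aleatoric uncertainty is the average entropy of the members, and their difference (the mutual information) is taken as epistemic uncertainty. A minimal sketch with made-up member predictions (this is one standard decomposition, not necessarily the paper's):

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Hypothetical ensemble of three probabilistic classifiers over two classes.
ensemble = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]

mean = [sum(col) / len(ensemble) for col in zip(*ensemble)]
total = entropy(mean)                                           # total uncertainty
aleatoric = sum(entropy(p) for p in ensemble) / len(ensemble)   # expected entropy
epistemic = total - aleatoric                                   # mutual information
```

Here the members disagree strongly, so the epistemic term is large even though the averaged prediction alone looks like pure 50/50 aleatoric noise.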
Uncertainty quantification has received increasing attention in machine learning in the recent past. In particular, a distinction between aleatoric and epistemic uncertainty has been found useful in this regard. The latter refers to the learner's (lack of) knowledge …
External link:
http://arxiv.org/abs/2203.06102