Showing 1 - 10 of 43
for search: '"Biedenkapp, André"'
A World Model is a compressed spatial and temporal representation of a real-world environment that allows one to train an agent or execute planning methods. However, world models are typically trained on observations from the real-world environment, …
External link:
http://arxiv.org/abs/2409.14084
High-dimensional action spaces remain a challenge for dynamic algorithm configuration (DAC). Interdependencies and varying importance between action dimensions are further known key characteristics of DAC problems. We argue that these Coupled Action …
External link:
http://arxiv.org/abs/2407.05789
In this work, we address the challenge of zero-shot generalization (ZSG) in Reinforcement Learning (RL), where agents must adapt to entirely novel environments without additional training. We argue that understanding and utilizing contextual cues, su…
External link:
http://arxiv.org/abs/2404.09521
Zero-shot generalization (ZSG) to unseen dynamics is a major challenge for creating generally capable embodied agents. To address the broader challenge, we start with the simpler setting of contextual reinforcement learning (cRL), assuming observabil…
External link:
http://arxiv.org/abs/2403.10967
We introduce Hierarchical Transformers for Meta-Reinforcement Learning (HTrMRL), a powerful online meta-reinforcement learning approach. HTrMRL aims to address the challenge of enabling reinforcement learning agents to perform effectively in previous…
External link:
http://arxiv.org/abs/2402.06402
Automated Machine Learning (AutoML) is used more than ever before to support users in determining efficient hyperparameters, neural architectures, or even full machine learning pipelines. However, users tend to mistrust the optimization process and i…
External link:
http://arxiv.org/abs/2206.03493
Author:
Adriaensen, Steven, Biedenkapp, André, Shala, Gresa, Awad, Noor, Eimer, Theresa, Lindauer, Marius, Hutter, Frank
The performance of an algorithm often critically depends on its parameter configuration. While a variety of automated algorithm configuration methods have been proposed to relieve users from the tedious and error-prone task of manually tuning paramet…
External link:
http://arxiv.org/abs/2205.13881
Author:
Benjamins, Carolin, Eimer, Theresa, Schubert, Frederik, Mohan, Aditya, Döhler, Sebastian, Biedenkapp, André, Rosenhahn, Bodo, Hutter, Frank, Lindauer, Marius
While Reinforcement Learning (RL) has made great strides towards solving increasingly complicated problems, many algorithms are still brittle to even slight environmental changes. Contextual Reinforcement Learning (cRL) provides a framework to model …
External link:
http://arxiv.org/abs/2202.04500
It has long been observed that the performance of evolutionary algorithms and other randomized search heuristics can benefit from a non-static choice of the parameters that steer their optimization behavior. Mechanisms that identify suitable configur…
External link:
http://arxiv.org/abs/2202.03259
Author:
Parker-Holder, Jack, Rajan, Raghu, Song, Xingyou, Biedenkapp, André, Miao, Yingjie, Eimer, Theresa, Zhang, Baohe, Nguyen, Vu, Calandra, Roberto, Faust, Aleksandra, Hutter, Frank, Lindauer, Marius
Published in:
Journal of Artificial Intelligence Research 74 (2022) 517-568
The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to …
External link:
http://arxiv.org/abs/2201.03916