Showing 1 - 10 of 18 results for search: '"Otto, Fabian"'
Existing off-policy reinforcement learning algorithms often rely on an explicit state-action-value function representation, which can be problematic in high-dimensional action spaces due to the curse of dimensionality. This reliance results in data i…
External link:
http://arxiv.org/abs/2403.04453
Author:
Li, Ge, Zhou, Hongyi, Roth, Dominik, Thilges, Serge, Otto, Fabian, Lioutikov, Rudolf, Neumann, Gerhard
Current advancements in reinforcement learning (RL) have predominantly focused on learning step-based policies that generate actions for each perceived state. While these methods efficiently leverage step information from environmental interaction, t…
External link:
http://arxiv.org/abs/2401.11437
We introduce a novel deep reinforcement learning (RL) approach called Movement Primitive-based Planning Policy (MP3). By integrating movement primitives (MPs) into the deep RL framework, MP3 enables the generation of smooth trajectories throughout th…
External link:
http://arxiv.org/abs/2306.12729
Learning self-supervised representations using reconstruction or contrastive losses improves performance and sample complexity of image-based and multimodal reinforcement learning (RL). Here, different self-supervised loss functions have distinct adv…
External link:
http://arxiv.org/abs/2302.05342
Episode-based reinforcement learning (ERL) algorithms treat reinforcement learning (RL) as a black-box optimization problem where we learn to select a parameter vector of a controller, often represented as a movement primitive, for a given task desc…
External link:
http://arxiv.org/abs/2210.09622
Movement Primitives (MPs) are a well-known concept to represent and generate modular trajectories. MPs can be broadly categorized into two types: (a) dynamics-based approaches that generate smooth trajectories from any initial state, e.g., Dynamic M…
External link:
http://arxiv.org/abs/2210.01531
Trust region methods are a popular tool in reinforcement learning as they yield robust policy updates in continuous and discrete action spaces. However, enforcing such trust regions in deep reinforcement learning is difficult. Hence, many approaches,…
External link:
http://arxiv.org/abs/2101.09207
Author:
Otto, Fabian.
Halle, Wittenberg, Univ., Diss., 2002.
External link:
http://sundoc.bibliothek.uni-halle.de/diss-online/02/02H154/prom.pdf
http://deposit.ddb.de/cgi-bin/dokserv?idn=965437922
Academic article
This result cannot be displayed to users who are not logged in.
You must log in to view this result.
Combining inputs from multiple sensor modalities effectively in reinforcement learning (RL) is an open problem. While many self-supervised representation learning approaches exist to improve performance and sample complexity for image-based RL, they…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b56633254d9451b4e4bf5afe452f2d94