Showing 1 - 10 of 10 for search: '"Peng, Baiyu"'
Author:
Peng, Baiyu, Billard, Aude
Planning for a wide range of real-world robotic tasks requires knowing and writing down all constraints. However, instances exist where these constraints are either unknown or challenging to specify accurately. A possible solution is to infer the unknow…
External link:
http://arxiv.org/abs/2408.01622
Author:
Peng, Baiyu, Billard, Aude
Planning for a wide range of real-world tasks requires knowing and writing down all constraints. However, instances exist where these constraints are either unknown or challenging to specify accurately. A possible solution is to infer the unknown constr…
External link:
http://arxiv.org/abs/2407.16485
Model-based Chance-Constrained Reinforcement Learning via Separated Proportional-Integral Lagrangian
Author:
Peng, Baiyu, Duan, Jingliang, Chen, Jianyu, Li, Shengbo Eben, Xie, Genjin, Zhang, Congsheng, Guan, Yang, Mu, Yao, Sun, Enxin
Safety is essential for reinforcement learning (RL) applied in the real world. Adding chance constraints (or probabilistic constraints) is a suitable way to enhance RL safety under uncertainty. Existing chance-constrained RL methods like the penalty…
External link:
http://arxiv.org/abs/2108.11623
Safety is essential for reinforcement learning (RL) applied in real-world tasks like autonomous driving. Chance constraints, which guarantee the satisfaction of state constraints with high probability, are suitable to represent the requirements in real…
External link:
http://arxiv.org/abs/2102.08539
Safety is essential for reinforcement learning (RL) applied in real-world situations. Chance constraints are suitable to represent the safety requirements in stochastic systems. Previous chance-constrained RL methods usually have a low convergence ra…
External link:
http://arxiv.org/abs/2012.10716
Reinforcement learning (RL) methods often rely on massive exploration data to search for optimal policies, and suffer from poor sampling efficiency. This paper presents a mixed reinforcement learning (mixed RL) algorithm by simultaneously using dual repr…
External link:
http://arxiv.org/abs/2003.00848
Model-Based Chance-Constrained Reinforcement Learning via Separated Proportional-Integral Lagrangian
Author:
Peng, Baiyu, Duan, Jingliang, Chen, Jianyu, Li, Shengbo Eben, Xie, Genjin, Zhang, Congsheng, Guan, Yang, Mu, Yao, Sun, Enxin
Published in:
IEEE Transactions on Neural Networks and Learning Systems; January 2024, Vol. 35 Issue: 1 p466-478, 13p
Academic article
Published in:
2021 60th IEEE Conference on Decision and Control (CDC).
Safety is essential for reinforcement learning (RL) applied in real-world situations. Chance constraints are suitable to represent the safety requirements in stochastic systems. Previous chance-constrained RL methods usually have a low convergence ra…
Published in:
Automotive Innovation; Aug2021, Vol. 4 Issue 3, p328-337, 10p