Constrained Exploration and Recovery from Experience Shaping

Authors: Pham, Tu-Hoa, De Magistris, Giovanni, Agravante, Don Joven, Chaudhury, Subhajit, Munawar, Asim, Tachibana, Ryuki
Publication year: 2018
Subject:
Document type: Working Paper
Description: We consider the problem of reinforcement learning under safety requirements, in which an agent is trained to complete a given task, typically formalized as the maximization of a reward signal over time, while avoiding undesirable actions or states, which are associated with lower rewards or penalties. Constructing and balancing the different reward components is difficult in the presence of multiple objectives, yet is crucial for producing a satisfactory policy. For example, when reaching a target while avoiding obstacles, low collision penalties can lead to reckless movements, while high penalties can discourage exploration. To circumvent this limitation, we examine past actions in terms of their safety outcomes to estimate which actions are acceptable and which should be avoided in the future. We then actively reshape the agent's action space during reinforcement learning, so that reward-driven exploration is constrained within safety limits. We propose an algorithm that learns such safety constraints in parallel with reinforcement learning and demonstrate its effectiveness in terms of both task completion and training time.
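To give a rough sense of the core idea of reshaping the action space from experienced safety outcomes, the following is a minimal, hypothetical sketch. The SafetyShaper class, its bound-update rule, and the toy safety criterion are illustrative assumptions only, not the paper's algorithm, which learns the constraints jointly with reinforcement learning (see the code repository linked below).

```python
import numpy as np

class SafetyShaper:
    """Maintains per-dimension action bounds estimated from experience.

    Illustrative assumption: the safe region is a symmetric box whose
    half-widths shrink after unsafe outcomes and relax after safe ones.
    """

    def __init__(self, action_dim, limit=1.0, shrink=0.9, grow=1.02):
        self.limit = limit
        self.scale = np.full(action_dim, limit)  # current half-width of the safe box
        self.shrink = shrink  # contraction factor after an unsafe outcome
        self.grow = grow      # expansion factor after a safe outcome

    def constrain(self, action):
        # Reshape the action space: project the raw policy action into the safe box,
        # so reward-driven exploration stays within the current safety estimate.
        return np.clip(action, -self.scale, self.scale)

    def update(self, violated):
        # Tighten the bounds when the executed action led to a safety violation,
        # otherwise relax them back toward the original action limits.
        if violated:
            self.scale *= self.shrink
        else:
            self.scale = np.minimum(self.scale * self.grow, self.limit)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shaper = SafetyShaper(action_dim=2)
    for step in range(1000):
        raw_action = rng.uniform(-1.0, 1.0, size=2)  # stand-in for a policy sample
        safe_action = shaper.constrain(raw_action)
        violated = np.linalg.norm(safe_action) > 0.8  # toy safety criterion
        shaper.update(violated)
    print("learned safe half-widths:", shaper.scale)
```

In this toy loop the constraint adapts online from observed violations while the (here random) policy keeps exploring; the paper replaces both the hand-set update rule and the box-shaped region with constraints learned in parallel with the reinforcement learning process.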
Comment: Code: https://github.com/IBM/constrained-rl
Database: arXiv