Human-Feedback Shield Synthesis for Perceived Safety in Deep Reinforcement Learning

Authors: Iolanda Leite, Daniel Marta, Jana Tumova, Gaspar Isaac Melsión, Christian Pek
Year: 2022
Source: IEEE Robotics and Automation Letters. 7:406-413
ISSN: 2377-3774
Description: Despite the successes of deep reinforcement learning (RL), it is still challenging to obtain safe policies. Formal verification approaches ensure safety at all times, but usually overly restrict the agent's behaviors, since they assume adversarial behavior of the environment. Instead of assuming adversarial behavior, we suggest focusing on perceived safety, i.e., policies that avoid undesired behaviors while having a desired level of conservativeness. To obtain policies that are perceived as safe, we propose a shield synthesis framework with two distinct loops: (1) an inner loop that trains policies with a set of actions constrained by shields whose conservativeness is parameterized, and (2) an outer loop that presents example rollouts of the policy to humans and collects their feedback to update the parameters of the shields in the inner loop. We demonstrate our approach on the Lunar Lander RL benchmark and a scenario in which a mobile robot navigates around humans. For the latter, we conducted two user studies to obtain policies that were perceived as safe. Our results indicate that our framework converges to policies that are perceived as safe, is robust against noisy feedback, and can query feedback for multiple policies at the same time.
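The two-loop structure described in the abstract can be sketched as follows. This is a minimal illustrative Python sketch, not the authors' implementation: the risk model, toy dynamics, shield rule, and all function names (`shielded_actions`, `train_policy`, `human_feedback`, `synthesize`) are assumptions made for exposition only, and the human rater is simulated with a fixed preference.

```python
import random

def shielded_actions(state, actions, conservativeness):
    """Inner-loop shield: keep only actions whose (assumed) risk score
    stays below a threshold tightened by the conservativeness parameter."""
    def risk(a):
        # Placeholder risk model; the paper's shields are synthesized formally.
        return abs(state - a)
    allowed = [a for a in actions if risk(a) <= 1.0 / (1.0 + conservativeness)]
    return allowed or [min(actions, key=risk)]  # never leave the agent action-less

def train_policy(conservativeness, episodes=100):
    """Stand-in for RL training under the shield-constrained action set."""
    state = 0.0
    for _ in range(episodes):
        actions = [-1.0, -0.5, 0.0, 0.5, 1.0]
        a = random.choice(shielded_actions(state, actions, conservativeness))
        state = 0.9 * state + a  # toy dynamics
    return {"conservativeness": conservativeness}

def human_feedback(rollout):
    """Outer loop: a human rates a rollout as too risky (+1) or too
    conservative (-1); simulated here with a fixed preferred level."""
    preferred = 2.0
    return 1.0 if rollout["conservativeness"] < preferred else -1.0

def synthesize(iterations=20, step=0.5):
    """Alternate inner-loop training and outer-loop feedback updates."""
    c = 0.0  # initial shield conservativeness
    for _ in range(iterations):
        policy = train_policy(c)                          # inner loop
        c = max(0.0, c + step * human_feedback(policy))   # outer loop update
    return c
```

Under this simulated rater, the conservativeness parameter climbs toward the preferred level and then oscillates within one step of it, illustrating how repeated feedback steers the shields toward policies perceived as safe.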
Database: OpenAIRE