Hierarchical Program-Triggered Reinforcement Learning Agents For Automated Driving
Author: | Harshit Soora, Pallab Dasgupta, Briti Gangopadhyay |
Year of publication: | 2021 |
Subject: |
FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Artificial Intelligence (cs.AI); Computer Science - Neural and Evolutionary Computing (cs.NE); Mechanical Engineering; Automotive Engineering; Computer Science Applications; Deep learning; Reinforcement learning; Hierarchy; Task (computing); Certification; Bottleneck; Interpretability |
DOI: | 10.48550/arxiv.2103.13861 |
Description: | Recent advances in Reinforcement Learning (RL) combined with Deep Learning (DL) have demonstrated impressive performance in complex tasks, including autonomous driving. The use of RL agents in autonomous driving leads to a smooth, human-like driving experience, but the limited interpretability of Deep Reinforcement Learning (DRL) creates a verification and certification bottleneck. Instead of relying on a single RL agent to learn a complex task, we propose HPRL - Hierarchical Program-triggered Reinforcement Learning, which uses a hierarchy consisting of a structured program along with multiple RL agents, each trained to perform a relatively simple task. The focus of verification shifts to the master program under simple guarantees from the RL agents, leading to a significantly more interpretable and verifiable implementation as compared to a complex RL agent. The framework is evaluated on different driving tasks and on NHTSA pre-crash scenarios using CARLA, an open-source dynamic urban driving simulator. Comment: The paper is under consideration in Transactions on Intelligent Transportation Systems |
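The description outlines the HPRL architecture: a verifiable master program decides which pre-trained RL agent is triggered at each step, and each agent only has to honor a simple behavioural contract. Below is a minimal, illustrative Python sketch of that program-triggered dispatch pattern, assuming hypothetical agent names, observation fields, and trigger conditions; it is not the authors' implementation and the stand-in policies are placeholders for trained RL networks.

```python
# Sketch of a program-triggered hierarchy (assumed structure, not the paper's code):
# a rule-based master program selects which pre-trained RL agent acts each step.

from typing import Callable, Dict

Observation = Dict[str, float]
Action = Dict[str, float]


def lane_follow_agent(obs: Observation) -> Action:
    # Stand-in for an RL policy trained only to keep the lane.
    return {"steer": -0.1 * obs["lane_offset"], "throttle": 0.4, "brake": 0.0}


def emergency_brake_agent(obs: Observation) -> Action:
    # Stand-in for an RL policy trained only to stop safely.
    return {"steer": 0.0, "throttle": 0.0, "brake": 1.0}


def left_turn_agent(obs: Observation) -> Action:
    # Stand-in for an RL policy trained only to execute a left turn.
    return {"steer": -0.5, "throttle": 0.3, "brake": 0.0}


AGENTS: Dict[str, Callable[[Observation], Action]] = {
    "lane_follow": lane_follow_agent,
    "emergency_brake": emergency_brake_agent,
    "left_turn": left_turn_agent,
}


def master_program(obs: Observation) -> str:
    """Structured dispatcher: the object of verification.

    Each branch relies only on a simple guarantee from the triggered agent
    (e.g. "emergency_brake stops within d meters"), which is what makes the
    hierarchy easier to verify than one monolithic DRL policy.
    """
    if obs["obstacle_distance"] < 10.0:
        return "emergency_brake"
    if obs["route_command"] == 1.0:  # 1.0 encodes "turn left at junction"
        return "left_turn"
    return "lane_follow"


if __name__ == "__main__":
    obs = {"lane_offset": 0.3, "obstacle_distance": 25.0, "route_command": 0.0}
    agent_name = master_program(obs)
    print(agent_name, AGENTS[agent_name](obs))
```

In this sketch, swapping a stand-in function for a trained policy does not change the master program, so verification effort stays concentrated in the small rule-based dispatcher, consistent with the description above.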
Database: | OpenAIRE |
External link: |