End-to-End Safe Reinforcement Learning through Barrier Functions for Safety-Critical Continuous Control Tasks
Author: Richard Cheng, Joel W. Burdick, Gábor Orosz, Richard M. Murray
Year: 2019
Subject: reinforcement learning; control barrier functions; Gaussian processes; control theory; control engineering; inverted pendulum; end-to-end principle; Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); Computer Science - Systems and Control (eess.SY)
Source: AAAI
ISSN: 2374-3468, 2159-5399
DOI: 10.1609/aaai.v33i01.33013387
Description: Reinforcement Learning (RL) algorithms have found limited success beyond simulated applications, and one main reason is the absence of safety guarantees during the learning process. Real-world systems would realistically fail or break before an optimal controller can be learned. To address this issue, we propose a controller architecture that combines (1) a model-free RL-based controller with (2) model-based controllers utilizing control barrier functions (CBFs) and (3) online learning of the unknown system dynamics, in order to ensure safety during learning. Our general framework leverages the success of RL algorithms to learn high-performance controllers, while the CBF-based controllers both guarantee safety and guide the learning process by constraining the set of explorable policies. We utilize Gaussian Processes (GPs) to model the system dynamics and its uncertainties. Our novel controller synthesis algorithm, RL-CBF, guarantees safety with high probability during the learning process, regardless of the RL algorithm used, and demonstrates greater policy exploration efficiency. We test our algorithm on (1) control of an inverted pendulum and (2) autonomous car-following with wireless vehicle-to-vehicle communication, and show that our algorithm attains much greater sample efficiency in learning than other state-of-the-art algorithms and maintains safety during the entire learning process. Published in AAAI 2019.
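To make the described architecture concrete, below is a minimal sketch of the CBF safety-filter idea from the abstract: an RL action is minimally modified so that a discrete-time barrier condition holds at every step, which keeps the state inside the safe set throughout learning. The single-integrator dynamics, the barrier functions, and all constants are illustrative assumptions for a toy problem, not the paper's implementation; the paper's full RL-CBF algorithm instead works with learned GP dynamics and their uncertainty bounds.

```python
import numpy as np

# Minimal sketch of a CBF safety filter layered on top of an RL action
# (all dynamics, barriers, and constants here are illustrative assumptions).
#
# Assumed system: single integrator  x_{k+1} = x_k + DT * u_k
# Safe set: |x| <= X_MAX, encoded by two barrier functions
#   h1(x) = X_MAX - x >= 0   and   h2(x) = x + X_MAX >= 0
# Discrete-time CBF condition: h(x_{k+1}) >= (1 - GAMMA) * h(x_k)

DT, X_MAX, GAMMA, U_MAX = 0.05, 1.0, 0.5, 5.0

def cbf_filter(x, u_rl):
    """Minimally modify the RL action so both CBF constraints hold.
    Each constraint is linear in u, so the 1-D QP
    min (u - u_rl)^2 reduces to clipping u_rl to an interval."""
    # h1: X_MAX - x - DT*u >= (1 - GAMMA)*(X_MAX - x)  =>  u <= GAMMA*(X_MAX - x)/DT
    u_hi = GAMMA * (X_MAX - x) / DT
    # h2: x + DT*u + X_MAX >= (1 - GAMMA)*(x + X_MAX)  =>  u >= -GAMMA*(x + X_MAX)/DT
    u_lo = -GAMMA * (x + X_MAX) / DT
    return float(np.clip(u_rl, max(u_lo, -U_MAX), min(u_hi, U_MAX)))

# Usage: a stand-in "RL policy" keeps pushing right; the filter only
# intervenes near the boundary, so exploration proceeds but stays safe.
x = 0.0
for _ in range(100):
    u_rl = 4.0                      # hypothetical RL policy output
    u = cbf_filter(x, u_rl)
    x = x + DT * u
    assert abs(x) <= X_MAX + 1e-9   # safety invariant holds at every step
print(f"final state x = {x:.3f} (within |x| <= {X_MAX})")
```

Roughly speaking, the paper's GP model would enter this sketch by shifting each constraint bound by a high-probability error bound on the unmodeled dynamics, so the filter stays conservative exactly where the learned model is uncertain; that is what yields the "safety with high probability" guarantee mentioned in the abstract.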
Database: OpenAIRE
External link: