Author:
Kanellopoulos, Aris; Fotiadis, Filippos; Sun, Chuangchuang; Xu, Zhe; Vamvoudakis, Kyriakos G.; Topcu, Ufuk; Dixon, Warren E.
Year of publication:
2021
Subject:
Document type:
Working Paper
DOI:
10.1109/CDC45484.2021.9683309
Description:
In this paper, we develop safe reinforcement-learning-based controllers for systems tasked with accomplishing complex missions that can be expressed as linear temporal logic specifications, similar to those required by search-and-rescue missions. We decompose the original mission into a sequence of tracking sub-problems under safety constraints. We impose the safety conditions by utilizing barrier functions to map the constrained optimal tracking problem in the physical space to an unconstrained one in the transformed space. Furthermore, we develop policies that intermittently update the control signal to solve the tracking sub-problems with a reduced burden on communication and computation resources. Subsequently, an actor-critic algorithm is utilized to solve the underlying Hamilton-Jacobi-Bellman equations. Finally, we support our proposed framework with stability proofs and showcase its efficacy via simulation results.
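Note: As an illustration of the barrier-function mapping mentioned in the description above (a minimal sketch of a standard choice in this line of work, not necessarily the exact transformation used in the paper), a scalar state $x$ constrained to an interval $(a, A)$ with $a < 0 < A$ can be mapped by
$$ s = b(x) = \log\!\left(\frac{A\,(a - x)}{a\,(A - x)}\right), \qquad x = b^{-1}(s) = aA\,\frac{e^{s} - 1}{a\,e^{s} - A}, $$
which sends $(a, A)$ onto the entire real line. Any trajectory that remains bounded in the transformed coordinate $s$ then automatically satisfies the original state constraint, so the constrained optimal tracking problem can be posed as an unconstrained one in $s$.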
Database:
arXiv
External link: