Lyapunov-based reinforcement learning for distributed control with stability guarantee
Author: | Yao, Jingshi; Han, Minghao; Yin, Xunyuan |
---|---|
Publication year: | 2024 |
Subject: | |
Document type: | Working Paper |
Description: | In this paper, we propose a Lyapunov-based reinforcement learning method for the distributed control of nonlinear systems comprising interacting subsystems, with guaranteed closed-loop stability. Specifically, we conduct a detailed stability analysis and, based on Lyapunov's theorem, derive sufficient conditions that ensure closed-loop stability under a model-free distributed control scheme. These Lyapunov-based conditions are leveraged to guide the design of the local reinforcement learning control policy for each subsystem. The local controllers exchange only scalar-valued information during the training phase, and they do not need to communicate once training is completed and the controllers are deployed online. The effectiveness and performance of the proposed method are evaluated on a benchmark chemical process consisting of two reactors and one separator. Comment: 28 pages, 10 figures; journal: Computers and Chemical Engineering |
Database: | arXiv |
External link: | |
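
The abstract above gives no implementation details, so the following is only a minimal, hypothetical sketch of the general idea it describes: using a Lyapunov decrease condition to penalize a local subsystem's policy search, with a scalar signal standing in for the information exchanged with a neighbor during training. The dynamics, the quadratic Lyapunov candidate V(x) = x^2, the decrease rate, the penalty weight, and the coupling signal are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' algorithm): one local subsystem learns a
# linear feedback gain by random-search policy improvement, while a quadratic
# Lyapunov candidate V(x) = x^2 penalizes updates that violate the decrease
# condition V(x_next) <= (1 - alpha) * V(x). The scalar "coupling" term mimics
# the scalar-valued information received from a neighboring subsystem during
# training. All dynamics and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05           # required fractional decrease of V per step (assumed)
penalty_weight = 10.0  # weight on the Lyapunov-violation penalty (assumed)

def subsystem_step(x, u, coupling):
    """Assumed scalar nonlinear subsystem with a neighbor coupling term."""
    return 0.9 * x + 0.2 * np.sin(x) + 0.5 * u + 0.1 * coupling

def rollout_cost(k, horizon=50):
    """Episode cost plus penalty for violating the Lyapunov decrease condition."""
    x, cost = 2.0, 0.0
    for t in range(horizon):
        coupling = 0.3 * np.sin(0.1 * t)   # stand-in for the neighbor's scalar signal
        u = -k * x                         # linear local control policy
        x_next = subsystem_step(x, u, coupling)
        V, V_next = x**2, x_next**2
        cost += x**2 + 0.1 * u**2          # quadratic stage cost (assumed)
        violation = max(0.0, V_next - (1.0 - alpha) * V)
        cost += penalty_weight * violation  # Lyapunov-based penalty
        x = x_next
    return cost

# Simple random-search improvement of the scalar feedback gain k.
k = 0.0
for _ in range(200):
    candidate = k + 0.1 * rng.standard_normal()
    if rollout_cost(candidate) < rollout_cost(k):
        k = candidate

print(f"learned gain k = {k:.3f}, final cost = {rollout_cost(k):.3f}")
```

The penalty term is one simple way to fold a Lyapunov-style stability requirement into an otherwise model-free policy search; the paper itself derives formal sufficient conditions and applies them within a reinforcement learning framework, which this toy example does not reproduce.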