Adaptive Suboptimal Output-Feedback Control for Linear Systems Using Integral Reinforcement Learning
| Author: | Lemei M. Zhu, Gan Oon Peen, Frank L. Lewis, Baozeng Yue, Hamidreza Modares |
| --- | --- |
| Year of publication: | 2015 |
| Source: | IEEE Transactions on Control Systems Technology. 23:264-273 |
| ISSN: | 2374-0159, 1063-6536 |
| DOI: | 10.1109/tcst.2014.2322778 |
| Description: | Reinforcement learning (RL) techniques have been successfully used to find optimal state-feedback controllers for continuous-time (CT) systems. However, in most real-world control applications it is not practical to measure all of the system states, and it is desirable to design output-feedback controllers instead. This paper develops an online learning algorithm based on the integral RL (IRL) technique to find a suboptimal output-feedback controller for partially unknown CT linear systems. At each iteration, the proposed IRL-based algorithm solves an IRL Bellman equation online, in real time, to evaluate an output-feedback policy, and then updates the output-feedback gain using the information given by the evaluated policy. The method does not require knowledge of the system drift dynamics. An adaptive observer provides full-state estimates for the IRL Bellman equation during learning; the observer is no longer needed once the learning process is finished. The convergence of the algorithm to a suboptimal output-feedback solution and the performance of the method are verified through simulation on two real-world applications, namely, the X-Y table and the F-16 aircraft. (A minimal sketch of the IRL policy-iteration loop is given after this record.) |
| Database: | OpenAIRE |
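The abstract describes an iterative scheme: evaluate the current policy by solving an IRL Bellman equation from measured data, then improve the gain from the evaluated value function. The following Python sketch shows only the state-feedback core of that IRL policy-iteration loop for a CT LQR problem; it is a minimal illustration under assumed plant matrices, horizon, and data-collection scheme, not the paper's output-feedback algorithm (the output-feedback gain parameterization and the adaptive observer are omitted).

```python
# Minimal sketch of integral reinforcement learning (IRL) policy iteration
# for a continuous-time LQR problem. Illustrative assumptions throughout:
# the plant (A, B), cost weights (Q, R), horizon T, and the batch of short
# data segments are NOT taken from the paper. A is used only to simulate
# data; the learning updates never reference A ("unknown drift dynamics").
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # stable illustrative plant
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

T = 0.1                        # length of each IRL integration interval
K = np.zeros((1, 2))           # initial stabilizing gain (A itself is stable)
rng = np.random.default_rng(0)

def quad_basis(x):
    """Basis such that quad_basis(x) @ vech(P) == x' P x for symmetric P."""
    W = 2.0 * np.outer(x, x) - np.diag(x ** 2)
    return W[np.triu_indices(len(x))]

for it in range(15):
    Phi, b = [], []
    for _ in range(12):                      # a batch of short data segments
        x0 = rng.uniform(-1.0, 1.0, 2)       # fresh state: simulation convenience

        def rhs(t, z):
            x = z[:2]
            u = -K @ x
            dx = A @ x + B @ u               # plant dynamics (simulation only)
            dc = x @ Q @ x + u @ R @ u       # running cost
            return np.concatenate([dx, [dc]])

        sol = solve_ivp(rhs, (0.0, T), np.concatenate([x0, [0.0]]),
                        rtol=1e-8, atol=1e-10)
        x1, c_int = sol.y[:2, -1], sol.y[2, -1]
        # IRL Bellman equation: x0'P x0 - x1'P x1 = integral of the cost
        Phi.append(quad_basis(x0) - quad_basis(x1))
        b.append(c_int)

    # Policy evaluation: least-squares solve for vech(P)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(b), rcond=None)
    P = np.zeros((2, 2))
    P[np.triu_indices(2)] = theta
    P = P + P.T - np.diag(np.diag(P))        # rebuild symmetric P

    # Policy improvement uses only the input matrix B, not the drift A
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-6:
        break
    K = K_new

print("Learned gain K:", K)
print("Learned value matrix P:\n", P)
```

With exact policy evaluation this iteration reduces to Kleinman's Newton iteration for the algebraic Riccati equation, which is why convergence to the LQR solution can be expected; the paper's contribution is to carry out the same evaluate/improve cycle with an output-feedback gain and observer-supplied state estimates during learning.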