Reinforcement Learning Approach to Design Practical Adaptive Control for a Small-Scale Intelligent Vehicle

Author: Youchang Sun, Jiaxi Li, Haitao Bai, Bo Hu, Shuang Li, Xiaoyu Yang, Jie Yang
Language: English
Year of publication: 2019
Source: Symmetry, Vol 11, Iss 9, p 1139 (2019)
ISSN: 2073-8994
DOI: 10.3390/sym11091139
Description: Reinforcement learning (RL) based techniques have been employed for the tracking and adaptive cruise control of a small-scale vehicle, with the aim of transferring the obtained knowledge to a full-scale intelligent vehicle in the near future. Unlike most other control techniques, the purpose of this study is to seek a practical method that enables the vehicle, in the real environment and in real time, to learn the control behavior on its own while adapting to changing circumstances. In this context, it is necessary to design an algorithm that symmetrically considers both time efficiency and accuracy. Meanwhile, in order to realize adaptive cruise control specifically, a set of symmetrical control actions consisting of steering angle and vehicle speed needs to be optimized simultaneously. In this paper, firstly, the experimental setup of the small-scale intelligent vehicle is introduced. Subsequently, three model-free RL algorithms are implemented to develop, and finally form, the strategy that keeps the vehicle within its lane at constant top velocity. Furthermore, a model-based RL strategy is compared that combines learning from real experience with planning from simulated experience. Finally, a Q-learning based adaptive cruise control strategy is integrated into the existing tracking control architecture to allow the vehicle to slow down in curves and accelerate on straightaways. The experimental results show that the Q-learning and Sarsa(λ) algorithms achieve better tracking behavior than conventional Sarsa, and that Q-learning outperforms Sarsa(λ) in terms of computational complexity. The Dyna-Q method performs similarly to the Sarsa(λ) algorithm, but with a significant reduction in computational time. Compared with a fine-tuned proportion-integration-differentiation (PID) controller, the well-balanced Q-learning is seen to perform better, and it can also be easily applied to control problems with more than one control action.
Database: OpenAIRE
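The tabular Q-learning update underlying the tracking and cruise-control strategies described above can be sketched as follows. This is a minimal illustrative Python sketch, not the authors' implementation: the states and actions here are hypothetical stand-ins (e.g. lateral-error bins as states, discrete steering/speed choices as actions) for the paper's symmetrical steering-angle/vehicle-speed action set, and the learning rate and discount factor are assumed values.

```python
import random

def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One Q-learning step on a dict-backed table:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]

def epsilon_greedy(Q, s, actions, eps=0.1, rng=random):
    """Pick a random action with probability eps, else the greedy one."""
    if rng.random() < eps:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

# Hypothetical usage: reward the vehicle for staying centered in its lane.
Q = {}
steering_actions = ["left", "straight", "right"]
q_update(Q, s=0, a="straight", r=1.0, s_next=0, actions=steering_actions)
```

Being off-policy, this update bootstraps from the greedy `max` over next-state values regardless of the action actually taken, which is what distinguishes Q-learning from the on-policy Sarsa and Sarsa(λ) variants the abstract compares it with.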