Adaptive Leader-Follower Formation Control and Obstacle Avoidance via Deep Reinforcement Learning
Author: Runhan Sun, Xiyao Ma, Yanlin Zhou, Xiaolin Li, George Pu, Fan Lu, Hsi-Yuan Chen
Year of publication: 2019
Subject: Artificial neural network; Control engineering; Modular design; Convolutional neural network; Control theory; Obstacle avoidance; Convergence; Reinforcement learning; Robot; Artificial intelligence & image processing
Source: IROS
DOI: 10.1109/iros40897.2019.8967561
Description: We propose a deep reinforcement learning (DRL) methodology for the tracking, obstacle avoidance, and formation control of nonholonomic robots. By separating vision-based control into a perception module and a controller module, we can train a DRL agent without sophisticated physics or 3D modeling. The modular framework also avoids costly retraining of an end-to-end image-to-action neural network and makes it easier to transfer the controller to different robots. First, we train a convolutional neural network (CNN) to accurately localize the robot in an indoor setting with a dynamic foreground/background. Then, we design a new DRL algorithm named Momentum Policy Gradient (MPG) for continuous control tasks and prove its convergence. We also show that MPG is robust in tracking varying leader movements and extends naturally to formation control. Through reward shaping, features such as collision and obstacle avoidance can be easily integrated into the DRL controller (see the reward-shaping sketch after this record).
Database: OpenAIRE
External link:
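
The description outlines a modular pipeline: a CNN perception module for indoor localization feeding a DRL controller trained with the Momentum Policy Gradient algorithm, with collision and obstacle avoidance folded in through reward shaping. The snippet below is a minimal sketch of how a shaped reward for one leader-follower control step could be assembled; the function name, distance terms, and weights (`shaped_reward`, `desired_gap`, `w_track`, `w_obstacle`) are illustrative assumptions, not the paper's actual reward definition.

```python
import numpy as np

# Hypothetical reward shaping for one leader-follower step, illustrating how
# tracking/formation keeping and obstacle avoidance can be combined into a
# single scalar reward. Terms and weights are assumptions for illustration.
def shaped_reward(follower_pos, leader_pos, obstacles,
                  desired_gap=0.5, collision_radius=0.2,
                  w_track=1.0, w_obstacle=5.0):
    """Return a scalar reward for one control step."""
    # Tracking/formation term: penalize deviation of the follower-leader
    # distance from the desired gap.
    gap = np.linalg.norm(follower_pos - leader_pos)
    r_track = -w_track * abs(gap - desired_gap)

    # Obstacle term: strong penalty whenever the follower enters an
    # obstacle's collision radius.
    r_obstacle = 0.0
    for obs in obstacles:
        if np.linalg.norm(follower_pos - obs) < collision_radius:
            r_obstacle -= w_obstacle

    return r_track + r_obstacle


# Example: follower 0.7 m from the leader (desired gap 0.5 m), no collision.
follower = np.array([0.0, 0.0])
leader = np.array([0.7, 0.0])
print(shaped_reward(follower, leader, obstacles=[np.array([2.0, 2.0])]))
```

In a full training loop this scalar would be returned by the environment at each step and maximized by the policy-gradient learner; extending the formation term to several followers is a matter of summing one such term per robot pair.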