Author: Viitala, Ari, Boney, Rinu, Zhao, Yi, Ilin, Alexander, Kannala, Juho
Year of publication: 2020
Subject:
Document type: Working Paper
Description: We present Learning to Drive (L2D), a low-cost benchmark for real-world reinforcement learning (RL). L2D involves a simple and reproducible experimental setup in which an RL agent has to learn to drive a Donkey car around three miniature tracks, given only monocular image observations and the speed of the car. The agent has to learn to drive from disengagements, which occur when it drives off the track. We present and open-source our training pipeline, which makes it straightforward to apply any existing RL algorithm to the task of autonomous driving with a Donkey car. We test imitation learning as well as state-of-the-art model-free and model-based algorithms on the proposed L2D benchmark. Our results show that existing RL algorithms can learn to drive the car from scratch in less than five minutes of interaction. We demonstrate that RL algorithms can learn from sparse and noisy disengagements to drive even faster than imitation learning and a human operator.
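
For illustration, the sketch below shows the kind of agent-environment interaction loop the benchmark builds on, written against the open-source gym-donkeycar simulator interface rather than the authors' L2D pipeline itself. The environment id, configuration keys, simulator path, and fixed steering/throttle values are assumptions taken from the gym-donkeycar project and may differ between versions.

# Minimal sketch of a Donkey car interaction loop using the open-source
# gym-donkeycar package; this is NOT the authors' L2D training pipeline.
# Environment id and configuration keys are assumptions and may vary by version.
import gym
import gym_donkeycar  # noqa: F401  (importing registers the donkey-* environments)
import numpy as np

# Assumed configuration: path to the simulator binary and a TCP port (placeholders).
conf = {"exe_path": "/path/to/donkey_sim", "port": 9091}
env = gym.make("donkey-generated-track-v0", conf=conf)

obs = env.reset()  # monocular camera image as an H x W x 3 RGB array
for _ in range(200):
    # Action is [steering, throttle]; a fixed "drive slowly forward" command
    # stands in here for a learned policy.
    action = np.array([0.0, 0.3])
    obs, reward, done, info = env.step(action)
    if done:
        # In L2D terms, an episode ends on a disengagement,
        # i.e. when the car leaves the track.
        obs = env.reset()

env.close()

In the real benchmark this loop is driven by the learned policy (imitation learning, model-free RL, or model-based RL), with disengagements providing the sparse, noisy learning signal described above.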
Database: arXiv
External link: