Automated eco-driving in urban scenarios using deep reinforcement learning
Author: | Markus Eisenbarth, Lucas Koch, Marius Wegener, Jakob Andert |
Year of publication: | 2021 |
Subject: | Logistics & transportation; Computer science; Real-time computing; Control (management); Probabilistic logic; Transportation; Energy consumption; Management Science and Operations Research; Travel time; Automotive Engineering; Electric vehicle; Reinforcement learning; Advice (complexity); Energy (signal processing); Civil and Structural Engineering |
Source: | Transportation Research Part C: Emerging Technologies. 126:102967 |
ISSN: | 0968-090X |
DOI: | 10.1016/j.trc.2021.102967 |
Description: | Urban settings are challenging environments in which to implement eco-driving strategies for automated vehicles. It is often assumed that sufficient information on the group of preceding vehicles is available to accurately predict the traffic situation. Because vehicle-to-vehicle communication was introduced only recently, this assumption will not hold until a sufficiently high penetration rate within the vehicle fleet has been reached. Thus, in the present study, we employed Reinforcement Learning (RL) to develop eco-driving strategies for cases where little data on the traffic situation are available. An A-segment electric vehicle was simulated using detailed efficiency models to accurately determine its energy-saving potential. A probabilistic traffic environment featuring signalized urban roads and multiple preceding vehicles was integrated into the simulation model. Only information on the traffic light timing and minimal sensor data were provided to the control algorithm. A twin-delayed deep deterministic policy gradient (TD3) agent was implemented and trained to control the vehicle efficiently and safely in this environment. Energy savings of up to 19% compared with a simulated human driver and up to 11% compared with a fine-tuned Green Light Optimal Speed Advice (GLOSA) algorithm were determined in a probabilistic traffic scenario reflecting real-world conditions. Overall, the RL agents showed a better trade-off between travel time and energy consumption than the GLOSA reference. |
Database: | OpenAIRE |
External link: |
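The description above refers to a GLOSA baseline and to an RL reward that trades energy consumption against travel time. The sketch below is purely illustrative and is not the authors' implementation: it assumes a simplified setting with a known time until the next green phase, a 50 km/h urban speed limit, and a hypothetical linear weighting `alpha` between energy and time.

```python
# Illustrative sketch only -- not the method from the cited paper.

def glosa_target_speed(distance_m: float,
                       time_to_green_s: float,
                       v_min: float = 2.0,
                       v_max: float = 13.9) -> float:
    """GLOSA-style cruise speed (m/s): arrive at the stop line no earlier
    than the green switch, clipped to comfort/legal limits (13.9 m/s ~ 50 km/h)."""
    if time_to_green_s <= 0.0:          # light is already green
        return v_max
    v = distance_m / time_to_green_s    # speed that arrives exactly at the switch
    return max(v_min, min(v, v_max))


def step_reward(energy_j: float, dt_s: float, alpha: float = 0.05) -> float:
    """Hypothetical per-step RL reward penalizing energy use and elapsed time;
    alpha sets the travel-time vs. energy trade-off mentioned in the abstract."""
    return -(energy_j / 1000.0 + alpha * dt_s)


if __name__ == "__main__":
    # 150 m from the stop line, green in 20 s -> cruise at 7.5 m/s (27 km/h)
    print(glosa_target_speed(150.0, 20.0))
```

In such a setup, an RL agent would replace the fixed GLOSA speed rule with a learned policy, while `step_reward` shows one simple way the energy/travel-time trade-off could be encoded; the paper's actual reward and vehicle models are more detailed.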