Mixed-Autonomy Traffic Control with Proximal Policy Optimization
Author: | Haoran Wei, Keith Decker, Lena Mashayekhy, Xuanzhang Liu |
---|---|
Year: | 2019 |
Subject: | Logistics & transportation; Mathematical optimization; Computer science; Control (management); Throughput; Vehicle dynamics; Traffic optimization; Reinforcement learning; Distributed learning; Collision avoidance; Traffic simulator |
Source: | VNC |
DOI: | 10.1109/vnc48660.2019.9062809 |
Description: | This work studies mixed-autonomy traffic optimization at the network level with Deep Reinforcement Learning (DRL). In mixed-autonomy traffic, a mixture of connected autonomous vehicles (CAVs) and human-driven vehicles shares the roads at the same time. We hypothesize that controlling distributed CAVs at the network level can outperform individually controlled CAVs. Our goal is to improve traffic fluidity in terms of the vehicles' average velocity and collision avoidance. We propose three distributed learning control policies for CAVs in mixed-autonomy traffic using Proximal Policy Optimization (PPO), a policy-gradient DRL method. We conduct experiments with different traffic settings and CAV penetration rates on the Flow framework, a recently released open-source microscopic traffic simulator. The experiments show that network-level RL policies for controlling CAVs outperform individual-level RL policies in terms of total reward and average velocity. |
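The record names PPO as the policy-gradient method used. For readers unfamiliar with it, a minimal NumPy sketch of PPO's clipped surrogate loss follows; the function name, array-based interface, and `eps=0.2` default are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO clipped surrogate loss (to be minimized).

    ratio     -- pi_new(a|s) / pi_old(a|s) per sampled action
    advantage -- advantage estimate per sampled action
    eps       -- clip range; 0.2 is a common default (assumed here)
    """
    unclipped = ratio * advantage
    # Clipping the ratio removes the incentive to move the new
    # policy far from the old one in a single update.
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # PPO maximizes the minimum of the two terms; negate for a loss.
    return -np.minimum(unclipped, clipped).mean()
```

With `ratio = 2.0` and a positive advantage, the clipped term caps the objective at `1 + eps` times the advantage, which is what keeps each policy update conservative.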
Database: | OpenAIRE |
External link: |