Cooperative Deep Reinforcement Learning for Large-Scale Traffic Grid Signal Control
Author: Feng Bao, Jie Wang, Alex G. B. Jin, Tian Tan, Qionghai Dai, Yue Deng
Year of publication: 2020
Subject: Computer science; Distributed computing; Grid; Computer Science Applications; Human-Computer Interaction; Vehicle dynamics; Traffic congestion; Control and Systems Engineering; Task analysis; Reinforcement learning; Electrical and Electronic Engineering; Intelligent transportation system; Software; Information Systems
Source: IEEE Transactions on Cybernetics. 50:2687-2700
ISSN: 2168-2275, 2168-2267
Description: Exploiting reinforcement learning (RL) for traffic congestion reduction is a frontier topic in intelligent transportation research. The difficulty of this problem stems from the inability of an RL agent to simultaneously monitor multiple signal lights while accounting for the complicated traffic dynamics in different regions of a traffic system. This challenge is even more pronounced when forming control decisions on a large-scale traffic grid, where the RL action space grows exponentially with the number of intersections in the grid. In this paper, we tackle this problem by proposing a cooperative deep reinforcement learning (Coder) framework. The intuition behind Coder is to decompose the original difficult RL task into a number of subproblems with relatively easy RL goals. Accordingly, we implement Coder with multiple regional agents and a centralized global agent. Each regional agent learns its own RL policy and value functions over a small region with a limited action set. The centralized global agent then hierarchically aggregates the RL results from the regional agents and forms the final $Q$-function over the entire large-scale traffic grid. The experimental investigations demonstrate that the proposed Coder reduces congestion, measured by the number of waiting vehicles, by 30% on average during high-density traffic flows in simulations.
Database: OpenAIRE
External link:
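The decomposition described in the abstract, with regional agents learning local value functions and a global agent aggregating them into a joint Q-estimate, can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the class names, the tabular Q-learning updates, and the additive aggregation rule are all simplifying assumptions made here for clarity (the paper uses deep networks and a hierarchical aggregation scheme).

```python
class RegionalAgent:
    """Tabular Q-function over one region's local state and small action set.
    A stand-in for the paper's per-region deep RL agent (assumption)."""

    def __init__(self, n_actions, lr=0.1, gamma=0.95):
        self.n_actions = n_actions
        self.lr = lr          # learning rate
        self.gamma = gamma    # discount factor
        self.q = {}           # (state, action) -> Q-value

    def q_value(self, state, action):
        return self.q.get((state, action), 0.0)

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update on the local region.
        best_next = max(self.q_value(next_state, a) for a in range(self.n_actions))
        old = self.q_value(state, action)
        self.q[(state, action)] = old + self.lr * (reward + self.gamma * best_next - old)


class GlobalAgent:
    """Combines regional Q-values into a Q-estimate for the joint action
    on the full grid, so the exponential joint space is never enumerated."""

    def __init__(self, regions):
        self.regions = regions

    def joint_q(self, states, joint_action):
        # Additive aggregation is an illustrative assumption; the paper's
        # global agent learns the aggregation hierarchically.
        return sum(r.q_value(s, a)
                   for r, s, a in zip(self.regions, states, joint_action))

    def best_joint_action(self, states):
        # With an additive aggregate, each region can be argmaxed
        # independently: cost grows linearly, not exponentially,
        # in the number of regions.
        return tuple(
            max(range(r.n_actions), key=lambda a, r=r, s=s: r.q_value(s, a))
            for r, s in zip(self.regions, states)
        )


# Usage: two toy regions, one update each, then a joint decision.
r1, r2 = RegionalAgent(2), RegionalAgent(2)
r1.update("s", 1, reward=1.0, next_state="s")
r2.update("s", 0, reward=2.0, next_state="s")
g = GlobalAgent([r1, r2])
print(g.best_joint_action(["s", "s"]))  # each region picks its own best phase
```

The key point the sketch captures is the one the abstract makes: by decomposing the grid into regions, the per-agent action space stays small, and the global agent only has to combine regional value estimates rather than search the full joint action space.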