Popis: |
Multi-agent deep reinforcement learning (MDRL) is a popular choice for multi-intersection traffic signal control, producing decentralized, cooperative signal strategies for specific traffic networks. Despite its widespread use, current MDRL algorithms have notable limitations. First, their network-specific multi-agent settings hinder the transferability and generalization of learned signal policies to other traffic networks. Second, existing MDRL algorithms struggle to adapt to varying numbers of vehicles crossing the network. This paper introduces a novel Cooperative Multi-Agent Deep Q-Network (CMDQN) for traffic signal control that alleviates traffic congestion. We incorporate features such as the signal state at the preceding junction, the distance between junctions, visual features, and average vehicle speed. Our CMDQN applies a Decentralized Multi-Agent Network (DMN), employing a Markov game abstraction for collaboration and state-information sharing between agents to reduce waiting times. Our work combines Reinforcement Learning (RL) and a Deep Q-Network (DQN) for adaptive traffic signal control, leveraging deep computer vision for real-time traffic-density information. We also propose intersection-level and network-wide reward functions to evaluate performance and optimize traffic flow. The developed system was evaluated in both synthetic and real-world experiments: the synthetic network was built in the Simulation of Urban MObility (SUMO) traffic simulator, while the real-world network used traffic data collected from cameras installed at actual traffic signals. Compared to the baseline model, our results show improved performance across several key metrics, reducing waiting times and improving traffic flow. This research presents a promising approach to cooperative traffic signal control and contributes to ongoing efforts to enhance traffic management systems.
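
To make the reward design concrete, the following is a minimal sketch, not the paper's exact formulation, of how intersection-level and network-wide rewards could be computed in a SUMO experiment through the TraCI Python API. It assumes the intersection reward is the negative cumulative waiting time on that signal's controlled lanes and that the network-wide reward is the mean over all intersections; the function names and the configuration file "network.sumocfg" are hypothetical.

import traci

def intersection_reward(tls_id: str) -> float:
    # Negative cumulative waiting time over the lanes controlled by one signal
    # (assumed congestion measure; the paper's exact reward may differ).
    lanes = set(traci.trafficlight.getControlledLanes(tls_id))
    return -sum(traci.lane.getWaitingTime(lane) for lane in lanes)

def network_reward() -> float:
    # Network-wide reward: mean of the per-intersection rewards.
    tls_ids = traci.trafficlight.getIDList()
    return sum(intersection_reward(t) for t in tls_ids) / max(len(tls_ids), 1)

# Rollout skeleton: advance the simulation and query both reward signals.
traci.start(["sumo", "-c", "network.sumocfg"])  # hypothetical SUMO configuration
for step in range(3600):
    traci.simulationStep()
    r_local = {t: intersection_reward(t) for t in traci.trafficlight.getIDList()}
    r_global = network_reward()
    # ... feed r_local / r_global into the per-agent DQN updates here ...
traci.close()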