Solving the single-track train scheduling problem via Deep Reinforcement Learning
Author: | Agasucci, Valerio; Grani, Giorgio; Lamorgese, Leonardo |
---|---|
Year of publication: | 2020 |
Subject: | |
Source: | Journal of Rail Transport Planning & Management, 26, p.100394 (2023) |
Document type: | Working Paper |
DOI: | 10.1016/j.jrtpm.2023.100394 |
Description: | Every day, railways experience disturbances and disruptions, both on the network and the fleet side, that affect the stability of rail traffic. Induced delays propagate through the network, leading to a mismatch between demand and supply for goods and passengers and, in turn, to a loss in service quality. In these cases, it is the duty of human traffic controllers, the so-called dispatchers, to do their best to minimize the impact on traffic. However, dispatchers inevitably have a limited depth of perception of the knock-on effects of their decisions, particularly of how they affect areas of the network outside their direct control. In recent years, much work in Decision Science has been devoted to developing methods to solve the problem automatically and to support dispatchers in this challenging task. This paper investigates Machine Learning-based methods for tackling this problem, proposing two different Deep Q-Learning methods (Decentralized and Centralized). Numerical results show the superiority of these techniques over classical matrix-based (tabular) Q-Learning. Moreover, the Centralized approach is compared with a MILP formulation, showing interesting results. The experiments are inspired by data provided by a U.S. Class 1 railroad. Comment: Graph neural network added; comparison with other methods added. 24 pages, 5 figures (1 b&w). (An illustrative Deep Q-Learning sketch follows this record.) |
Database: | arXiv |
External link: |
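
For readers unfamiliar with the terminology in the abstract, the sketch below illustrates the generic ingredients of Deep Q-Learning that the paper builds on: a neural network approximating the Q-function, epsilon-greedy action selection, and a temporal-difference (Bellman) update. This is a minimal sketch, not the authors' implementation; the state dimension, action set, architecture, and hyperparameters (`STATE_DIM`, `N_ACTIONS`, `QNetwork`, `gamma`, `epsilon`) are hypothetical placeholders, PyTorch is used only as an example framework, and the paper's Decentralized/Centralized variants and graph-neural-network extension are not reproduced here.

```python
# Minimal, generic Deep Q-Learning sketch (NOT the paper's code).
# All sizes and hyperparameters below are illustrative assumptions.
import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 32, 4   # hypothetical encoded-state size and number of dispatching actions

class QNetwork(nn.Module):
    """Maps an encoded traffic state to one Q-value per candidate dispatching action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

q_net = QNetwork(STATE_DIM, N_ACTIONS)
target_net = QNetwork(STATE_DIM, N_ACTIONS)
target_net.load_state_dict(q_net.state_dict())   # frozen copy used for stable targets
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy choice among actions (e.g. which train is allowed to proceed)."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(dim=1).item())

def td_update(state, action, reward, next_state, done):
    """One temporal-difference update on a single observed transition."""
    q_sa = q_net(state.unsqueeze(0))[0, action]
    with torch.no_grad():
        target = reward + (1.0 - done) * gamma * target_net(next_state.unsqueeze(0)).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The "classical Q-Learning based on matrices" that the abstract uses as a baseline would replace `QNetwork` with a lookup table indexed by discretized states; the deep variants approximate that table with a neural network, which is what the paper's numerical results show to be advantageous.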