A Multi-Depot Vehicle Routing Problem with Stochastic Road Capacity and Reduced Two-Stage Stochastic Integer Linear Programming Models for Rollout Algorithm
Author: | Hsin-Vonn Seow, Lai Soon Lee, Stefan Pickl, Wadi Khalid Anuar |
Language: | English |
Year of publication: | 2021 |
Subject: | vehicle routing problem; two-stage stochastic programming; approximate dynamic programming; dynamic programming; rollout algorithm; matheuristic; Markov decision process; reinforcement learning; integer programming; optimal decision; operations research; logistics and transportation |
Source: | Mathematics, Vol. 9, Iss. 13, Article 1572 (2021) |
ISSN: | 2227-7390 |
Description: | A matheuristic approach based on reduced two-stage Stochastic Integer Linear Programming (SILP) models is presented. The proposed approach is suitable for constructing a policy dynamically, on the go, during the rollout algorithm. The rollout algorithm is part of the Approximate Dynamic Programming (ADP) lookahead solution approach for the Multi-Depot Dynamic Vehicle Routing Problem with Stochastic Road Capacity (MDDVRPSRC), framed as a Markov Decision Process (MDP). First, a Deterministic Multi-Depot VRP with Road Capacity (D-MDVRPRC) is presented. Then an extension, MDVRPSRC-2S, is presented as an offline two-stage SILP model of the MDDVRPSRC. These models are validated on small simulated instances with CPLEX. Next, two reduced versions of the MDVRPSRC-2S model (MDVRPSRC-2S1 and MDVRPSRC-2S2) are derived, each dedicated to a specific routing task: replenishing supplies and delivering supplies, respectively. The reduced models are used interchangeably, depending on the remaining capacity of the vehicle, and repeatedly during the execution of rollout in reinforcement learning. As a result, it is shown that a base policy consisting of an exact optimal decision at each decision epoch can be constructed through these reduced two-stage SILP models. Results obtained from the resulting rollout policy, with CPLEX executed during rollout, are also presented to validate the reduced models and the matheuristic algorithm. This approach is proposed as a simple implementation of rollout for the lookahead approach in ADP. (A minimal illustrative sketch of the rollout decision loop follows this record.) |
Database: | OpenAIRE |
External link: |
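
A minimal, hypothetical sketch (Python) of the rollout decision loop outlined in the description: at each decision epoch, every candidate move is scored by one-step lookahead, using a base policy that solves one of two reduced two-stage SILP models selected by the vehicle's remaining load. All identifiers here (State, candidate_actions, sample_transition, solve_mdvrpsrc_2s1, solve_mdvrpsrc_2s2) and the Monte Carlo averaging are illustrative assumptions, not the paper's implementation; the actual base policy solves the reduced models exactly with CPLEX rather than using the dummy cost estimates shown.

# Hypothetical sketch of a rollout decision step with a two-model base policy.
import random
from dataclasses import dataclass

@dataclass
class State:
    node: int        # current vehicle location
    load: int        # remaining supplies on the vehicle
    capacity: int    # vehicle capacity
    time: int        # decision epoch

def candidate_actions(state):
    # Placeholder: nodes reachable under the sampled road capacities.
    return [0, 1, 2]

def sample_transition(state, action, rng):
    # Placeholder stochastic transition: stage cost plus successor state.
    cost = rng.uniform(1.0, 5.0)
    next_state = State(node=action, load=max(state.load - 1, 0),
                       capacity=state.capacity, time=state.time + 1)
    return cost, next_state

def solve_mdvrpsrc_2s1(state):
    return 10.0   # stub standing in for a CPLEX solve of the replenishment model

def solve_mdvrpsrc_2s2(state):
    return 8.0    # stub standing in for a CPLEX solve of the delivery model

def base_policy_cost(state):
    # Base policy: pick the reduced two-stage SILP model according to the
    # vehicle's remaining load (replenishment when empty, delivery otherwise).
    if state.load == 0:
        return solve_mdvrpsrc_2s1(state)
    return solve_mdvrpsrc_2s2(state)

def rollout_decision(state, n_samples=20, rng=None):
    # One-step lookahead: estimate Q(state, a) = E[stage cost + base-policy
    # cost-to-go] by sampling, then return the minimising action.
    rng = rng or random.Random(0)
    best_action, best_q = None, float("inf")
    for a in candidate_actions(state):
        q = 0.0
        for _ in range(n_samples):
            stage_cost, next_state = sample_transition(state, a, rng)
            q += stage_cost + base_policy_cost(next_state)
        q /= n_samples
        if q < best_q:
            best_action, best_q = a, q
    return best_action, best_q

if __name__ == "__main__":
    print(rollout_decision(State(node=0, load=3, capacity=5, time=0)))

In the paper's setting, the two stub solvers would be replaced by exact CPLEX solves of MDVRPSRC-2S1 and MDVRPSRC-2S2, so each base-policy decision is an exact optimal decision for the current epoch, as stated in the description.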