Author:
Noriega, Roberto¹ (AUTHOR) noriega@ualberta.ca; Pourrahimian, Yashar¹ (AUTHOR)
Subject:
Source:
International Journal of Mining, Reclamation & Environment. Jul 2024, Vol. 38, Issue 6, p442-459. 18p.
Abstract:
The open-pit production system is a highly dynamic and uncertain environment with complex interactions between haulage and loading equipment on a shared road network. One of the key decisions in open-pit short-term planning is the allocation sequence of shovels to mining faces to meet the production targets established by long- and medium-term strategic plans. Deep Reinforcement Learning (DRL) techniques are commonly used in dynamic production environments. In this approach, an agent is trained on a simulation of the production system to learn the optimal decisions based on the system's current state. This paper proposes a DRL approach based on the Deep Q-Learning algorithm to obtain a robust shovel allocation plan for open-pit short-term planning. First, a discrete-event simulation of the mining production system incorporating trucks, shovels, crushers, waste dumps, and the road network is created. This simulation models the uncertainties of each component's operating cycle based on historical activity records, and it is used to train the DRL agent. By interacting with the production simulator, the agent learns a robust shovel allocation strategy for the next production quarter (three months) that meets the tonnes-per-hour (TPH) production target to be delivered to the crusher feeds. The framework is tested in an iron ore open-pit mine case study, where the shovel allocation agent successfully learns a strategy that consistently delivers the desired production target. [ABSTRACT FROM AUTHOR]
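Below is a minimal, illustrative sketch of the general setup the abstract describes, not the authors' implementation: a Deep Q-Network agent chooses which mining face to allocate a shovel to and is trained against a toy stochastic stand-in for the discrete-event production simulator. The environment, state features, reward shaping, and all constants (N_FACES, TARGET_TPH, dig rates) are assumptions made for this sketch.

# Hedged sketch of a DQN shovel-allocation agent; all names and numbers
# below are illustrative assumptions, not details from the paper.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

N_FACES = 4          # candidate mining faces (assumed)
TARGET_TPH = 1000.0  # crusher-feed target in tonnes/hour (assumed)

class ToyMineSim:
    """Toy stand-in for the discrete-event simulation of trucks, shovels,
    crushers, waste dumps, and the road network described in the paper."""

    def reset(self):
        self.remaining = np.ones(N_FACES)  # tonnage fraction left per face
        self.tph = 0.0                     # smoothed crusher-feed rate
        return self._obs()

    def _obs(self):
        # State: remaining tonnage per face plus normalised current TPH.
        return np.append(self.remaining, self.tph / TARGET_TPH).astype(np.float32)

    def step(self, face):
        # A random dig rate stands in for the cycle-time uncertainty that
        # the real simulator samples from historical activity records.
        rate = np.random.uniform(150.0, 350.0) * self.remaining[face]
        self.remaining[face] = max(0.0, self.remaining[face] - 0.05)
        self.tph = 0.9 * self.tph + 0.1 * rate * N_FACES
        reward = -abs(TARGET_TPH - self.tph) / TARGET_TPH  # penalise TPH gap
        done = self.remaining.sum() < 0.1                  # faces exhausted
        return self._obs(), reward, done

qnet = nn.Sequential(nn.Linear(N_FACES + 1, 64), nn.ReLU(),
                     nn.Linear(64, N_FACES))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)  # experience replay
gamma, eps, batch_size = 0.99, 0.2, 64

env = ToyMineSim()
for episode in range(200):
    s, done = env.reset(), False
    for _ in range(400):  # step cap so a stalled episode still ends
        if done:
            break
        # Epsilon-greedy shovel allocation decision.
        if random.random() < eps:
            a = random.randrange(N_FACES)
        else:
            with torch.no_grad():
                a = int(qnet(torch.tensor(s)).argmax())
        s2, r, done = env.step(a)
        buffer.append((s, a, r, s2, done))
        s = s2
        if len(buffer) >= batch_size:
            bs, ba, br, bs2, bd = map(
                np.array, zip(*random.sample(buffer, batch_size)))
            q_sa = qnet(torch.as_tensor(bs))[
                torch.arange(batch_size), torch.as_tensor(ba)]
            # One-step TD target; a separate target network is omitted
            # here for brevity (standard DQN would use one).
            with torch.no_grad():
                target = torch.as_tensor(br, dtype=torch.float32) + gamma * \
                    qnet(torch.as_tensor(bs2)).max(1).values * \
                    (1.0 - torch.as_tensor(bd, dtype=torch.float32))
            loss = nn.functional.mse_loss(q_sa, target)
            opt.zero_grad()
            loss.backward()
            opt.step()

After training, the greedy policy (argmax over the network's Q-values) yields the per-state face allocation; whether such a policy is robust in practice depends on how faithfully the simulator reproduces the operation's historical cycle-time variability.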
Database:
GreenFILE |
External link: