Abstract: |
Fog computing is seen as a key enabler for meeting the stringent requirements of the industrial Internet of Things (IIoT). Specifically, lower latency and reduced energy consumption of IIoT devices can be achieved by offloading computation-intensive tasks to fog access points (F-APs). However, traditional computation offloading optimization methods often have high complexity, which makes them inapplicable in practical IIoT systems. To overcome this issue, this article proposes a deep reinforcement learning (DRL) based approach that minimizes long-term system energy consumption in a computation offloading scenario with multiple IIoT devices and multiple F-APs. The proposal adopts a multi-agent setting to counter the curse of dimensionality of the action space: a separate DRL model is created for each IIoT device, which identifies the device's serving F-AP based on network and device states. Once F-AP selection is complete, a low-complexity greedy algorithm is executed at each F-AP under a computation capability constraint to determine which offloading requests are further forwarded to the cloud. Because training is performed offline in the cloud and decisions are then made online, iterative online optimization procedures are avoided; hence, F-APs can quickly adjust the F-AP selection for each device using the trained DRL models. Simulations demonstrate the impact of batch size on system performance and show that the proposed DRL-based approach performs competitively against various baselines, including exhaustive search and genetic algorithm based approaches. In addition, the generalization capability of the proposal is verified.
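The abstract describes a two-stage decision flow: a per-device DRL agent first selects a serving F-AP, and each F-AP then greedily admits requests up to its computation capability, forwarding the overflow to the cloud. Since the abstract gives no implementation details, the following is a minimal Python sketch of that flow under stated assumptions: the Q-values standing in for the trained per-device models, the capacity and demand figures, and the "smallest CPU demand first" greedy ordering are all illustrative choices, not the authors' implementation.

```python
import numpy as np

# Hypothetical illustration of the two-stage offloading decision described
# in the abstract; all names, numbers, and the greedy criterion are assumptions.

NUM_DEVICES = 8
NUM_FAPS = 3
FAP_CAPACITY = 5.0e9   # assumed CPU cycles/s available at each F-AP

rng = np.random.default_rng(0)

# Stand-in for the per-device trained DRL policies: the paper trains one
# model per IIoT device; here random Q-values over F-APs fake their output.
q_values = rng.random((NUM_DEVICES, NUM_FAPS))

# Stage 1: each device's agent selects its serving F-AP (argmax over Q-values).
serving_fap = q_values.argmax(axis=1)

# Offloading requests as (device id, required CPU cycles) -- synthetic numbers.
requests = [(d, rng.uniform(0.5e9, 2.0e9)) for d in range(NUM_DEVICES)]

# Stage 2: greedy admission at each F-AP under its capability constraint;
# requests that do not fit are forwarded to the cloud. The "smallest demand
# first" ordering is an assumption, not taken from the paper.
forwarded_to_cloud = []
for f in range(NUM_FAPS):
    local = sorted((r for r in requests if serving_fap[r[0]] == f),
                   key=lambda r: r[1])
    remaining = FAP_CAPACITY
    for dev, cycles in local:
        if cycles <= remaining:
            remaining -= cycles             # serve the request at the F-AP
        else:
            forwarded_to_cloud.append(dev)  # overflow goes to the cloud

print("serving F-AP per device:", serving_fap)
print("devices forwarded to cloud:", forwarded_to_cloud)
```

Note the division of labor this sketch mirrors: the learned (expensive-to-train, cheap-to-query) component handles only the F-AP selection, which keeps each agent's action space small, while the capacity-constrained forwarding step remains a simple greedy pass that an F-AP can run online with negligible overhead.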