Task Offloading and Resource Scheduling in Hybrid Edge-Cloud Networks
Authors: | Shichao Zhu, Lin Gui, Qi Zhang, Xiupu Lang |
---|---|
Year: | 2021 |
Subject: |
task offloading; resource scheduling; mobile edge computing; mobile cloud computing; deep reinforcement learning; reinforcement learning; optimization algorithm; cloud computing; distributed computing; scheduling (computing); task (project management); General Computer Science; General Engineering; General Materials Science; 02 engineering and technology; 0202 electrical engineering, electronic engineering, information engineering; 0203 mechanical engineering; 020206 networking & telecommunications; 020302 automobile design & engineering; TK1-9971 (Electrical engineering. Electronics. Nuclear engineering); Enhanced Data Rates for GSM Evolution |
Source: | IEEE Access, Vol. 9, pp. 85350-85366 (2021) |
ISSN: | 2169-3536 |
DOI: | 10.1109/access.2021.3088124 |
Description: | Computation-intensive mobile applications are proliferating and overload the computation capacity of smart mobile devices (SMDs). With the assistance of mobile edge computing and mobile cloud computing, SMDs can rent computation resources and offload computation-intensive applications to edge clouds and remote clouds, which reduces both the application completion delay and the energy consumption of SMDs. In this paper, we consider mobile applications modeled by task call graphs and investigate the joint task offloading and resource scheduling problem in hybrid edge-cloud networks. Owing to the interdependency of tasks, time-varying wireless channels, and stochastic availability of computation resources in hybrid edge-cloud networks, it is challenging to make task offloading decisions and schedule computation frequencies so as to minimize the weighted sum of energy, time, and rent cost (ETRC). To address this issue, we propose two efficient algorithms for different conditions of system information. With full system information, the task offloading and resource scheduling decisions are determined by semidefinite relaxation and dual decomposition methods. With partial system information, we propose a deep reinforcement learning framework in which future system information is inferred by long short-term memory (LSTM) networks, and the discrete offloading decisions and continuous computation frequencies are learned by a modified deep deterministic policy gradient (DDPG) algorithm. Extensive simulations evaluate the convergence of ETRC under various system parameters, and the results validate the superiority of the proposed task offloading and resource scheduling algorithms over baseline schemes. |
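The abstract's optimization target, the weighted ETRC cost, can be illustrated with a minimal sketch. The weight values (`w_e`, `w_t`, `w_r`) and the sample numbers below are hypothetical; the paper's exact formulation (weights, task-call-graph coupling, and scheduling constraints) is not reproduced here.

```python
# Minimal sketch: weighted sum of Energy, Time, and Rent Cost (ETRC).
# Weights and sample values are illustrative assumptions, not the paper's.

def etrc_cost(energy, delay, rent, w_e=0.4, w_t=0.4, w_r=0.2):
    """Weighted sum of energy consumption, completion delay, and rent cost."""
    return w_e * energy + w_t * delay + w_r * rent

# Compare two candidate decisions for a single task:
local = etrc_cost(energy=5.0, delay=8.0, rent=0.0)  # execute on the SMD
edge = etrc_cost(energy=1.0, delay=3.0, rent=2.0)   # offload to an edge cloud
best = min(("local", local), ("edge", edge), key=lambda x: x[1])
```

In this toy comparison the offloading option wins because its delay and energy savings outweigh the rent cost; the paper optimizes such decisions jointly over all tasks in the call graph.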
Database: | OpenAIRE |
External link: |