Author: YU Bin, LI Xue-hua, PAN Chun-yu, LI Na
Language: Chinese
Year of publication: 2022
Subject:
Source: Jisuanji kexue, Vol 49, Iss 7, Pp 248-253 (2022)
Document type: article
ISSN: 1002-137X
DOI: 10.11896/jsjkx.210400219
Description: Mobile edge computing (MEC) is used to enhance data processing in low-power networks and has become an efficient computing paradigm. This paper considers an edge-cloud collaborative system composed of multiple mobile terminals (MTs) that adopts a variety of offloading modes. To reduce the total time delay of the MTs, a task offloading algorithm based on deep reinforcement learning is proposed. It employs a deep neural network (DNN) as a scalable solution and learns the multi-base offloading mode from experience to minimize the total time delay. Simulation results indicate that, compared with the deep Q-network (DQN) algorithm and the deep deterministic policy gradient (DDPG) algorithm, the proposed algorithm significantly improves the maximum performance gain. In addition, the proposed algorithm converges well, and its result approaches the optimal result obtained by exhaustive search.
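As a rough illustration of the kind of approach the abstract describes (a DNN that learns offloading decisions from experience to reduce total delay), the sketch below shows a small policy network plus an experience-replay update. This is not the authors' code: the state definition, network sizes, training signal, and all identifiers are assumptions for illustration only.

```python
# Hedged sketch of a DNN-based task offloading policy trained from stored experience.
# Everything here (state layout, layer sizes, loss) is an assumption, not the paper's method.
import random
import torch
import torch.nn as nn

class OffloadingPolicy(nn.Module):
    """Maps a system state (e.g., task size and channel gain per mobile terminal)
    to a per-terminal probability of offloading the task to the edge/cloud."""
    def __init__(self, num_terminals: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_terminals * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_terminals), nn.Sigmoid(),  # offloading probability per MT
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def train_step(policy, optimizer, replay_buffer, batch_size=32):
    """One update from stored (state, best_decision) pairs, i.e. learning the
    offloading mode from past experience. replay_buffer is a list of tuples."""
    if len(replay_buffer) < batch_size:
        return
    batch = random.sample(replay_buffer, batch_size)
    states = torch.stack([s for s, _ in batch])
    targets = torch.stack([d for _, d in batch])
    loss = nn.functional.binary_cross_entropy(policy(states), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch the "best" decisions stored in the buffer would come from evaluating the delay of a few candidate offloading decisions per time slot and keeping the lowest-delay one, a common pattern in DNN-based offloading work; the paper's actual training procedure may differ.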
Database: Directory of Open Access Journals
External link: