Author: Tianyi Liu, Ruyu Luo, Fangmin Xu, Chaoqiong Fan, Chenglin Zhao
Language: English
Year of publication: 2020
Subject:
Source: Sensors, Vol 20, Iss 4, p 973 (2020)
Document type: article
ISSN: 1424-8220
DOI: 10.3390/s20040973
Description: With the progress of global urbanization, the Internet of Things (IoT) and smart cities are becoming hot research topics. As an emerging model, edge computing can play an important role in smart cities because of its low latency and good performance. IoT devices can reduce their time consumption by offloading computation to a mobile edge computing (MEC) server. However, if too many IoT devices simultaneously choose to offload their computation tasks to the MEC server over the limited wireless channel, channel congestion may occur, increasing the time overhead. Moreover, given the large number of IoT devices in smart cities, a centralized resource allocation algorithm requires extensive signaling exchange, resulting in low efficiency. To solve this problem, this paper studies the joint communication and computing policy of IoT devices in edge computing through game theory, and proposes distributed Q-learning algorithms with two learning policies. Simulation results show that the algorithm converges quickly to a balanced solution.
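The abstract above mentions distributed Q-learning for per-device offloading decisions; the Python sketch below is only a reading aid, not the authors' algorithm. It assumes a stateless learner per device, a binary action set (compute locally vs. offload), and an invented congestion-delay model; all parameter values (N_DEVICES, LOCAL_DELAY, ALPHA, EPSILON) are illustrative assumptions.

    # Minimal sketch of independent Q-learning for binary offloading decisions.
    # The congestion model and all numeric parameters are assumptions for illustration.
    import random

    N_DEVICES = 10
    ACTIONS = [0, 1]              # 0 = compute locally, 1 = offload to the MEC server
    LOCAL_DELAY = 1.0             # assumed fixed local processing delay
    ALPHA, EPSILON = 0.1, 0.1     # learning rate and exploration probability (assumed)

    # Stateless setting: each device keeps one Q-value per action.
    Q = [[0.0, 0.0] for _ in range(N_DEVICES)]

    def offload_delay(n_offloaders):
        """Assumed congestion model: delay grows with the number of simultaneous offloaders."""
        return 0.2 + 0.15 * n_offloaders

    for episode in range(2000):
        # Every device independently picks an action with an epsilon-greedy policy.
        acts = [random.choice(ACTIONS) if random.random() < EPSILON
                else max(ACTIONS, key=lambda a: Q[i][a])
                for i in range(N_DEVICES)]
        n_off = sum(acts)
        for i, a in enumerate(acts):
            delay = offload_delay(n_off) if a == 1 else LOCAL_DELAY
            reward = -delay                    # lower delay means higher reward
            # One-step Q update; the next-state term vanishes in the stateless case.
            Q[i][a] += ALPHA * (reward - Q[i][a])

    best = [max(ACTIONS, key=lambda a: Q[i][a]) for i in range(N_DEVICES)]
    print("devices preferring to offload:", sum(best))

Because each device updates its Q-values only from its own observed delay, no central controller or signaling exchange is involved; the negative-delay reward discourages offloading once the shared channel becomes congested, so the learners split between local computing and offloading, in the spirit of the balanced solution described in the abstract.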
Database: Directory of Open Access Journals
External link: