Deep Q‐learning based resource allocation in industrial wireless networks for URLLC
Author: | Dong-Seong Kim, Rizki Rivai Ginanjar, Sanjay Bhardwaj |
Year of publication: | 2020 |
Subject: | business; Computer science; Wireless network; Distributed computing; Control (management); Q-learning; networking & telecommunications; engineering and technology; Computer Science Applications; electrical engineering, electronic engineering, information engineering; Wireless; Resource allocation; Reinforcement learning; Electrical and Electronic Engineering |
Source: | IET Communications. 14:1022-1027 |
ISSN: | 1751-8636 |
DOI: | 10.1049/iet-com.2019.1211 |
Description: | Ultra-reliable low-latency communication (URLLC) is one of the promising services offered by fifth-generation technology for industrial wireless networks. Moreover, reinforcement learning is gaining attention due to its potential to learn from observed as well as unobserved results. Industrial wireless nodes (IWNs) may vary dynamically due to internal or external factors, and therefore call for avoiding unnecessary redesign of the network's resource allocation. Traditional methods are explicitly programmed, making it difficult for the network to react dynamically. To overcome this, a deep Q-learning (DQL)-based resource allocation strategy, which learns from the experienced trade-offs and interdependencies in the IWN, is proposed. The findings indicate that the algorithm can identify the best-performing measures to improve resource allocation. Moreover, DQL provides better control towards an ultra-reliable, low-latency IWN. Extensive simulations show that the suggested technique distributes URLLC resources fairly. In addition, the authors assess the impact of DQL's inherent learning parameters on resource allocation. |
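The Q-learning update underlying the paper's DQL approach can be illustrated with a minimal sketch. Note the hedges: this is a tabular stand-in (the paper uses a deep network as the Q-function approximator), and the state space, reward function, and hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import random

# Toy sketch: a Q-learning agent assigning one of N channels to an
# industrial wireless node. Tabular stand-in for the paper's deep
# Q-network; states, rewards, and parameters are illustrative.
N_CHANNELS = 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# State: index of the currently congested channel; action: channel to assign.
Q = [[0.0] * N_CHANNELS for _ in range(N_CHANNELS)]

def reward(state, action):
    # Avoiding the congested channel keeps latency low (reward +1);
    # colliding with it hurts reliability (penalty -1).
    return 1.0 if action != state else -1.0

def choose_action(state):
    if random.random() < EPSILON:              # explore
        return random.randrange(N_CHANNELS)
    return max(range(N_CHANNELS), key=lambda a: Q[state][a])  # exploit

random.seed(0)
state = random.randrange(N_CHANNELS)
for _ in range(2000):
    action = choose_action(state)
    r = reward(state, action)
    next_state = random.randrange(N_CHANNELS)  # congestion shifts randomly
    # Standard Q-learning (Bellman) update
    Q[state][action] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

# The learned greedy policy steers traffic away from the congested channel.
policy = [max(range(N_CHANNELS), key=lambda a: Q[s][a]) for s in range(N_CHANNELS)]
print(policy)
```

In the paper's setting the table would be replaced by a deep network trained on the same temporal-difference target, which is what lets the agent generalize across the dynamically varying IWN states instead of enumerating them.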
Database: | OpenAIRE |
External link: |