Deep Reinforcement Learning for Network Energy Saving in 6G and Beyond Networks
Authors: Tran, Dinh-Hieu; Van Huynh, Nguyen; Kaada, Soumeya; Vo, Van Nhan; Lagunas, Eva; Chatzinotas, Symeon
Year: 2024
Document type: Working Paper
Description: Network energy saving has received great attention from operators and vendors as a way to reduce energy consumption and CO2 emissions while significantly cutting costs for mobile network operators. However, energy-saving network design must still meet mobile users' (MUs') quality-of-service (QoS) requirements, such as throughput requirements (TRs). This work considers a mobile cellular network with many ground base stations (GBSs), some of which are deliberately switched off for network energy saving (NES) or fail, so the MUs in these outage cells are not served in time. Motivated by this, we formulate the problem of maximizing the network's total achievable throughput by optimizing the GBSs' antenna tilt and adaptive transmit power, subject to a given number of MUs being successfully served. An MU is considered successfully served if both its Reference Signal Received Power (RSRP) and its throughput requirement are satisfied. The resulting optimization problem is hard to solve: it involves multiple binary variables and non-convex constraints, together with random throughput requirements and random MU placement. We propose a Deep Q-learning-based algorithm that lets the network learn the uncertainty and dynamics of the transmission environment. Extensive simulation results show that the proposed algorithm significantly outperforms benchmark schemes (a hedged sketch of the formulation and learning loop follows this record).
Comment: 7 pages, 4 figures
Database: arXiv
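Read as stated, the optimization has roughly the following shape. This is a hedged reconstruction from the abstract alone; the symbols $a_m$, $R_m$, $\theta_b$, $P_b$, $\gamma$, and $N_{\min}$ are assumptions, not the paper's notation:

```latex
\max_{\{\theta_b,\,P_b\},\,\{a_m\}} \;\; \sum_{m \in \mathcal{M}} a_m R_m
\qquad \text{s.t.} \qquad
\sum_{m \in \mathcal{M}} a_m \ge N_{\min}, \qquad
a_m = 1 \;\Rightarrow\; \mathrm{RSRP}_m \ge \gamma \ \text{and}\ R_m \ge \mathrm{TR}_m, \qquad
a_m \in \{0,1\},
```

where $a_m$ indicates whether MU $m$ is successfully served, $R_m$ is its achievable throughput, and $\theta_b$, $P_b$ are the antenna tilt and transmit power of GBS $b$.

To make the Deep Q-learning approach concrete, below is a minimal sketch of a DQN loop that jointly selects a tilt and power level. The environment transition, reward, action grid, and state dimension are all placeholder assumptions; the paper's actual MDP design (state, action, and reward definitions) is not given in this record.

```python
# Hedged sketch: minimal Deep Q-learning for selecting (tilt, power) levels.
# All names and values below are illustrative assumptions, not the paper's.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

TILTS = [0, 2, 4, 6, 8]        # candidate downtilt angles (degrees), assumed
POWERS = [30, 36, 40, 43, 46]  # candidate transmit powers (dBm), assumed
ACTIONS = [(t, p) for t in TILTS for p in POWERS]
STATE_DIM = 8                  # e.g. per-cell load/RSRP statistics, assumed

class QNet(nn.Module):
    """Small MLP mapping a network-state vector to Q-values per (tilt, power)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, len(ACTIONS)),
        )
    def forward(self, x):
        return self.net(x)

def step_env(state, action_idx):
    """Placeholder transition: a real simulator would compute each MU's RSRP
    and throughput under the chosen tilt/power and reward the total
    throughput of successfully served MUs."""
    tilt, power = ACTIONS[action_idx]
    next_state = torch.randn(STATE_DIM)
    reward = float(power) / 46.0 - abs(tilt - 4) * 0.05  # toy proxy, assumed
    return next_state, reward

qnet, target = QNet(), QNet()
target.load_state_dict(qnet.state_dict())
opt = optim.Adam(qnet.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)   # experience replay buffer
gamma, eps = 0.99, 0.1          # discount factor, exploration rate

state = torch.randn(STATE_DIM)
for step in range(2_000):
    # epsilon-greedy action selection
    if random.random() < eps:
        a = random.randrange(len(ACTIONS))
    else:
        with torch.no_grad():
            a = int(qnet(state).argmax())
    next_state, r = step_env(state, a)
    buffer.append((state, a, r, next_state))
    state = next_state

    if len(buffer) >= 64:
        # sample a minibatch and take one temporal-difference step
        batch = random.sample(buffer, 64)
        s = torch.stack([b[0] for b in batch])
        acts = torch.tensor([b[1] for b in batch])
        rs = torch.tensor([b[2] for b in batch])
        s2 = torch.stack([b[3] for b in batch])
        q = qnet(s).gather(1, acts.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            q_tgt = rs + gamma * target(s2).max(1).values
        loss = nn.functional.mse_loss(q, q_tgt)
        opt.zero_grad(); loss.backward(); opt.step()

    if step % 200 == 0:
        target.load_state_dict(qnet.state_dict())  # periodic target sync
```

The replay buffer and periodic target-network sync are the standard DQN stabilizers; they fit the abstract's stated goal of learning under random MU placement and throughput requirements, since the agent trains on a decorrelated mix of past network states rather than only the most recent one.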