AI for Resource Allocation and Resource Allocation for AI: a two-fold paradigm at the network edge
Author: Busacca, Fabio Antonino
Contributors: TINNIRELLO, Ilenia
Language: English
Year of publication: 2022
Subject: Internet of Things; MINLP; IoT; Edge Network; Performance Evaluation; Low Power Wide Area Network; System Modeling; Settore ING-INF/03 - Telecomunicazioni; UAV; Software Defined Radio; Real Testbed; Vehicular Network; ML; LoRa; Reinforcement Learning; Resource Allocation; Machine Learning; Game Theory; Artificial Intelligence; AI; LPWAN; Colosseum; Channel Emulator; Channel Emulation; Emulation; SDR
Description: 5G-and-beyond and Internet of Things (IoT) technologies are pushing a shift from the classic cloud-centric view of the network to a new edge-centric vision. In this perspective, computation, communication, and storage resources are moved closer to the user, to the benefit of network responsiveness/latency and of improved context-awareness, that is, the ability to tailor network services to the user's live experience. However, these improvements do not come for free: edge networks are highly constrained and do not match the resource abundance of their cloud counterparts. Proper management of the few available resources is therefore of crucial importance to improve network performance in terms of responsiveness, throughput, and power consumption. However, networks in the so-called Age of Big Data result from the dynamic interactions of massive numbers of heterogeneous devices. As a consequence, traditional model-based Resource Allocation algorithms fail to cope with these dynamic and complex networks, and are being replaced by more flexible AI-based techniques. In this way, it is possible to design intelligent resource allocation frameworks that quickly adapt to the ever-changing dynamics of the network edge and best exploit the few available resources. Hence, Artificial Intelligence (AI) and, more specifically, Machine Learning (ML) techniques can clearly play a fundamental role in boosting and supporting resource allocation techniques at the edge.

But can AI/ML benefit from optimal Resource Allocation? Recently, the evolution towards Distributed and Federated Learning approaches, i.e. approaches where the learning process takes place in parallel at several devices, has brought important advantages in terms of the computational load of ML algorithms, the amount of information transmitted by the network nodes, and privacy. However, the scarcity of energy, processing, and, possibly, communication resources at the edge, especially in the IoT case, calls for proper resource management frameworks. In this view, the available resources should be assigned so as to reduce the learning time, while also limiting the energy consumption of the network nodes. A two-fold paradigm can thus emerge at the network edge, where AI boosts the performance of Resource Allocation and, vice versa, optimal Resource Allocation techniques speed up the learning process of AI algorithms.

Part I of this thesis explores the first topic, i.e. the usage of AI to support Resource Allocation at the edge, with a specific focus on two use cases, namely UAV-assisted cellular networks and vehicular networks. Part II deals instead with Resource Allocation for AI and, specifically, with the integration of Federated Learning techniques with the LoRa LPWAN protocol. The designed integration framework has been validated both in simulation environments and, most importantly, on the Colosseum platform, the world's largest channel emulator.
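To make the "Resource Allocation for AI" direction concrete, below is a minimal sketch, not taken from the thesis, of energy-aware client selection for one federated learning round at the edge: the scheduler picks the fastest devices that still have energy budget left, so that round latency shrinks without draining constrained nodes. All names, numbers, the greedy policy, and the stand-in "local training" step are illustrative assumptions.

```python
# A minimal sketch of energy-aware client selection for federated learning
# at the edge. All parameters and the greedy policy are assumptions made
# for illustration, not the thesis's actual allocation framework.
import numpy as np

rng = np.random.default_rng(0)

N_CLIENTS, DIM, ROUNDS, PER_ROUND = 8, 4, 3, 3
comp_time = rng.uniform(0.5, 3.0, N_CLIENTS)   # seconds per local update (assumed)
energy = np.full(N_CLIENTS, 10.0)              # per-device energy budget (assumed units)
cost = comp_time * 1.5                         # energy drained by one update (assumed)
global_w = np.zeros(DIM)                       # shared model weights

for rnd in range(ROUNDS):
    # Greedy allocation: among clients with enough energy left, take the
    # fastest ones; round latency is set by the slowest selected client.
    eligible = [i for i in range(N_CLIENTS) if energy[i] >= cost[i]]
    chosen = sorted(eligible, key=lambda i: comp_time[i])[:PER_ROUND]

    updates = []
    for i in chosen:
        # Stand-in for real local training: a noisy step toward a target model.
        local_w = global_w + 0.5 * (np.ones(DIM) - global_w) + rng.normal(0, 0.05, DIM)
        updates.append(local_w)
        energy[i] -= cost[i]

    global_w = np.mean(updates, axis=0)        # FedAvg-style aggregation
    print(f"round {rnd}: clients={chosen}, "
          f"latency={max(comp_time[i] for i in chosen):.2f}s, "
          f"w[0]={global_w[0]:.3f}")
```

The design choice captured here is the one the abstract argues for: the allocator trades off learning time (round latency) against the energy budgets of constrained IoT nodes, rather than selecting clients blindly.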
Database: OpenAIRE
External link: