Large-Scale Graph Reinforcement Learning in Wireless Control Systems

Authors: Lima, Vinicius; Eisen, Mark; Gatsis, Konstantinos; Ribeiro, Alejandro
Publication Year: 2022
Subject:
Document Type: Working Paper
Description: Modern control systems routinely employ wireless networks to exchange information between spatially distributed plants, actuators, and sensors. Because wireless networks exhibit random, rapidly changing transmission conditions that challenge assumptions commonly held in the design of control systems, proper allocation of communication resources is essential for reliable operation. Designing resource allocation policies, however, is challenging, which has motivated recent works to successfully exploit deep learning and deep reinforcement learning techniques to design resource allocation and scheduling policies for wireless control systems (WCSs). As the number of learnable parameters in a neural network grows with the size of the input signal, deep reinforcement learning may fail to scale, limiting the immediate generalization of such scheduling and resource allocation policies to large-scale systems. The interference and fading patterns among plants and controllers in the network, however, induce a time-varying graph that can be used to construct policy representations based on graph neural networks (GNNs), whose number of learnable parameters is independent of the number of plants in the network. We further establish, in the context of WCSs, that due to its inherent invariance to graph permutations the GNN can model scalable and transferable resource allocation policies, which are then trained with primal-dual reinforcement learning. Numerical experiments show that the proposed graph reinforcement learning approach yields policies that not only outperform baseline solutions and deep reinforcement learning-based policies in large-scale systems, but can also be transferred across networks of varying size.
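The two properties the abstract relies on, a parameter count independent of the number of plants and invariance (more precisely, equivariance) to graph permutations, can be illustrated with a minimal polynomial graph filter layer. This is a generic sketch, not the authors' architecture: `gnn_layer`, the filter order, and the use of a fading/interference matrix as the graph shift operator are illustrative assumptions.

```python
import numpy as np

def gnn_layer(S, X, H):
    """One graph convolutional layer as a polynomial graph filter.

    S: (n, n) graph shift operator, e.g. a matrix of fading/interference
       gains among the n plant-controller pairs (illustrative choice).
    X: (n, f_in) node features, e.g. per-plant channel and plant states.
    H: list of K filter taps, each (f_in, f_out) -- these are the ONLY
       learnable parameters, so their count does not depend on n.
    Returns (n, f_out) features after a pointwise ReLU.
    """
    n = S.shape[0]
    Z = np.zeros((n, H[0].shape[1]))
    Sk_X = X.copy()                 # S^0 X
    for Hk in H:
        Z += Sk_X @ Hk              # accumulate S^k X H_k
        Sk_X = S @ Sk_X             # advance to S^(k+1) X
    return np.maximum(Z, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Small network: 5 plants, 3 input features, 2 output features.
    S = rng.standard_normal((5, 5))
    X = rng.standard_normal((5, 3))
    H = [rng.standard_normal((3, 2)) for _ in range(3)]  # K = 3 taps
    Y = gnn_layer(S, X, H)

    # Permutation equivariance: relabeling plants permutes the output.
    P = np.eye(5)[rng.permutation(5)]
    print(np.allclose(gnn_layer(P @ S @ P.T, P @ X, H), P @ Y))

    # Transfer: the same taps H apply unchanged to a 50-plant network.
    S50 = rng.standard_normal((50, 50))
    X50 = rng.standard_normal((50, 3))
    print(gnn_layer(S50, X50, H).shape)
```

Because the taps `H` act on features rather than on individual nodes, a policy trained on a small network can be evaluated directly on a larger one, which is the transferability the experiments in the paper measure.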
Comment: Submitted to IEEE Transactions on Control of Network Systems (TCNS)
Database: arXiv