LExCI: A Framework for Reinforcement Learning with Embedded Systems
Author: Badalian, Kevin; Koch, Lucas; Brinkmann, Tobias; Picerno, Mario; Wegener, Marius; Lee, Sung-Yong; Andert, Jakob
Year of publication: 2023
Source: Applied Intelligence (2024)
Document type: Working Paper
DOI: 10.1007/s10489-024-05573-0
Description: Advances in artificial intelligence (AI) have led to its application in many areas of everyday life. In the context of control engineering, reinforcement learning (RL) represents a particularly promising approach as it is centred around the idea of allowing an agent to freely interact with its environment to find an optimal strategy. One of the challenges professionals face when training and deploying RL agents is that the agents often have to run on dedicated embedded devices, either to integrate them into an existing toolchain or to satisfy performance criteria like real-time constraints. Conventional RL libraries, however, cannot be easily utilised in conjunction with that kind of hardware. In this paper, we present a framework named LExCI, the Learning and Experiencing Cycle Interface, which bridges this gap and provides end-users with a free and open-source tool for training agents on embedded systems using the open-source library RLlib. Its operability is demonstrated with two state-of-the-art RL algorithms and a rapid control prototyping system.
Comment: The code, models, and data used for this work are available in a separate branch of LExCI's GitHub repository (https://github.com/mechatronics-RWTH/lexci-2/tree/lexci_paper). This paper has been submitted to Applied Intelligence (https://link.springer.com/journal/10489). 2024-06-27: Updated the footnote on the title page so that it provides information about the paper's Version of Record.
Database: arXiv
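For context, the "conventional RL libraries" the abstract contrasts with are typically driven entirely from a training workstation rather than from the target device. The following is a minimal sketch of such a workflow with RLlib (Ray 2.x API); the choice of algorithm (PPO), environment (CartPole-v1), and hyperparameters is an illustrative assumption, not taken from the paper:

```python
# Minimal sketch of a conventional RLlib training loop (Ray 2.x API).
# PPO and CartPole-v1 are illustrative assumptions, not from the paper.
# Everything here executes on the training machine itself; the gap LExCI
# addresses is running the agent on an embedded target instead.
import ray
from ray.rllib.algorithms.ppo import PPOConfig

ray.init()

config = (
    PPOConfig()
    .environment("CartPole-v1")       # standard benchmark environment
    .training(train_batch_size=4000)  # illustrative hyperparameter
)
algo = config.build()

# Each call to train() collects rollouts and runs one optimisation round.
for _ in range(5):
    result = algo.train()
    print(f"finished iteration {result['training_iteration']}")

algo.stop()
ray.shutdown()
```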