Reinforcement Learning Controllers for Soft Robots using Learned Environments
Authors: Berdica, Uljad; Jackson, Matthew; Veronese, Niccolò Enrico; Foerster, Jakob; Maiolino, Perla
Publication year: 2024
Source: 2024 IEEE 7th International Conference on Soft Robotics (RoboSoft), San Diego, CA, USA, 2024, pp. 933-939
Document type: Working Paper
DOI: 10.1109/RoboSoft60065.2024.10522003
Description: Soft robotic manipulators offer operational advantages due to their compliant and deformable structures. However, their inherently nonlinear dynamics present substantial challenges. Traditional analytical methods often depend on simplifying assumptions, while learning-based techniques can be computationally demanding and limit the control policies to existing data. This paper introduces a novel approach to soft robotic control, leveraging state-of-the-art policy gradient methods within parallelizable synthetic environments learned from data. We also propose a safety-oriented actuation-space exploration protocol via cascaded updates and weighted randomness. Specifically, our recurrent forward dynamics model is learned by generating a training dataset from a physically safe mean-reverting random walk in actuation space (sketched below) to explore the partially observed state space. We demonstrate a reinforcement learning approach towards closed-loop control through state-of-the-art actor-critic methods, which efficiently learn high-performance behaviour over long horizons. This approach removes the need for any prior knowledge of the robot's operation or capabilities and sets the stage for a comprehensive benchmarking tool in soft robotics control. Comment: soft manipulator, reinforcement learning, learned controllers
Database: arXiv
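The abstract describes exploring the actuation space with a physically safe mean-reverting random walk to collect data for the learned dynamics model. The sketch below is a minimal illustration of such a walk, written as an Ornstein-Uhlenbeck-style update with clipping to safe actuation limits; the function name, parameter values, and bounds are illustrative assumptions, and the paper's full protocol with cascaded updates and weighted randomness is not reproduced here.

```python
import numpy as np

def mean_reverting_walk(n_steps, n_actuators, theta=0.15, sigma=0.1,
                        mu=0.0, a_min=-1.0, a_max=1.0, seed=0):
    """Generate an actuation sequence from a mean-reverting random walk.

    Each step pulls the actuation back toward the rest point `mu` while adding
    Gaussian noise, so exploration stays near a physically safe operating region.
    """
    rng = np.random.default_rng(seed)
    a = np.full(n_actuators, mu, dtype=float)      # start at the rest actuation
    traj = np.empty((n_steps, n_actuators))
    for t in range(n_steps):
        drift = theta * (mu - a)                   # pull back toward the mean
        noise = sigma * rng.standard_normal(n_actuators)
        a = np.clip(a + drift + noise, a_min, a_max)  # stay within safe limits
        traj[t] = a
    return traj

# Example: a 1000-step exploration sequence for a hypothetical 3-chamber actuator
actions = mean_reverting_walk(n_steps=1000, n_actuators=3)
```

Pulling each sample back toward a rest actuation keeps the manipulator near a known-safe configuration while the noise term still drives coverage of the reachable, partially observed state space.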