Optimized Residual Action for Interaction Control with Learned Environments

Authors: Loris Roveda, Francesco Braghin, Pasquale Chiacchio, Enrico Ferrentino, Vincenzo Petrone, Alessandro Pozzi, Luca Puricelli
Publication year: 2023
DOI: 10.36227/techrxiv.21905433.v1
Description: Robotic tasks featuring interaction with other bodies are increasingly required in industrial contexts. The manipulators need to interact with the environment in a compliant way to avoid damage but, at the same time, are often required to accurately track a reference force. To this aim, interaction controllers are typically employed, but they either need human tinkering for parameter tuning or precise modeling of the environment the robot will interact with. The former is a time-consuming procedure, while the latter is necessarily affected by approximations, which often lead to failure during the actual application. Both aspects are problematic when the contact environment changes frequently. Current research is concentrating on devising high-performance force controllers that are simple to tune and quick to adapt to changing environments. Along this line, this work proposes a novel control strategy, which we term ORACLE (Optimized Residual Action for interaction Control with Learned Environments). It exploits an ensemble of neural networks to estimate the force generated by the robot-environment interaction. This estimate is fed to an optimal residual action controller that locally corrects the main action, i.e., the output of a base force controller, which guarantees stability. The ORACLE strategy has been implemented and tested both in the MuJoCo dynamic simulator and in a real-case scenario, in each case using a Franka Emika Panda robot as the test platform. Deploying the proposed strategy reduces the force tracking error while requiring a short setup time.
Database: OpenAIRE
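
The description outlines a residual-action structure: a learned ensemble predicts the interaction force, and a bounded residual locally corrects the output of a stabilizing base force controller. The sketch below illustrates that structure only; the class and function names, the proportional residual rule, and the gains are illustrative assumptions, not the authors' implementation or the paper's optimization.

```python
import numpy as np

class ForceEnsemble:
    """Aggregates force predictions from an ensemble of learned models
    (hypothetical stand-in for the paper's neural-network ensemble)."""
    def __init__(self, models):
        self.models = models  # list of callables: (state, action) -> force

    def predict(self, state, action):
        preds = np.array([m(state, action) for m in self.models])
        # Mean as the force estimate; the spread could serve as an
        # uncertainty measure (assumption, not stated in the abstract).
        return preds.mean(axis=0), preds.std(axis=0)


def base_force_controller(force_ref, force_meas, kp=0.02):
    """Placeholder admittance-like base controller that tracks the
    reference force and is assumed to guarantee stability."""
    return kp * (force_ref - force_meas)


def oracle_like_action(state, force_ref, force_meas, ensemble,
                       gain=0.1, bound=0.05):
    """Base action plus a bounded residual that reduces the predicted
    force tracking error (simplified stand-in for the optimal residual)."""
    a_base = base_force_controller(force_ref, force_meas)
    f_pred, _ = ensemble.predict(state, a_base)
    residual = np.clip(gain * (force_ref - f_pred), -bound, bound)
    return a_base + residual
```

In this reading, the base controller alone already yields a stable interaction, while the residual term adapts the commanded action to the learned environment model; clipping the residual keeps the correction local to the base action.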