Learning high-level robotic manipulation actions with visual predictive model.

Author: Ma, Anji; Chi, Guoyi; Ivaldi, Serena; Chen, Lipeng
Source: Complex & Intelligent Systems; Feb 2024, Vol. 10, Issue 1, p811-823, 13p
Abstract: Learning visual predictive models has great potential for real-world robot manipulation. A visual predictive model serves as a model of real-world dynamics for understanding the interactions between the robot and objects. However, prior works in the literature have focused mainly on low-level elementary robot actions, which typically result in lengthy, inefficient, and highly complex robot manipulation. In contrast, humans usually employ top-down reasoning over high-level actions rather than bottom-up stacking of low-level ones. To address this limitation, we present a novel formulation in which robot manipulation is accomplished through pick-and-place, a commonly applied high-level robot action realized through grasping. We propose a novel visual predictive model that combines an action decomposer with a video prediction network to learn the intrinsic semantic information of high-level actions. Experiments show that our model can accurately predict object dynamics (i.e., object movements under robot manipulation) when trained directly on observations of high-level pick-and-place actions. We also demonstrate that, combined with a sampling-based planner, our model achieves a higher success rate using high-level actions on a variety of real robot manipulation tasks. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
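
The abstract pairs a learned visual predictive model with a sampling-based planner over high-level pick-and-place actions. The following is a minimal sketch of that planning loop, not the authors' implementation: the `predict` dynamics stub, the `cost` function, and the `(pick_x, pick_y, place_x, place_y)` action parameterization are all illustrative assumptions standing in for the paper's action decomposer and video prediction network.

```python
# Minimal random-shooting planner over high-level pick-and-place actions,
# scored by a (hypothetical) learned visual predictive model.
import numpy as np

rng = np.random.default_rng(0)

def predict(obs, action):
    """Hypothetical stand-in for the paper's visual predictive model
    (action decomposer + video prediction network): maps the current
    observation and a pick-and-place action to a predicted next observation.
    Here we fake the dynamics: the picked object lands at the place location."""
    next_obs = obs.copy()
    next_obs[:2] = action[2:]  # (place_x, place_y) becomes the object position
    return next_obs

def cost(pred_obs, goal_obs):
    """Distance between predicted and goal observations (illustrative)."""
    return float(np.linalg.norm(pred_obs - goal_obs))

def plan_pick_and_place(obs, goal_obs, n_samples=256,
                        workspace=((0.0, 1.0), (0.0, 1.0))):
    """Sampling-based planning: draw candidate high-level actions, roll each
    through the predictive model, and return the lowest-cost action."""
    (x_lo, x_hi), (y_lo, y_hi) = workspace
    # Each action = (pick_x, pick_y, place_x, place_y) sampled in the workspace.
    actions = rng.uniform([x_lo, y_lo, x_lo, y_lo],
                          [x_hi, y_hi, x_hi, y_hi],
                          size=(n_samples, 4))
    costs = [cost(predict(obs, a), goal_obs) for a in actions]
    return actions[int(np.argmin(costs))]

if __name__ == "__main__":
    obs = np.array([0.2, 0.2])   # toy "observation": current object position
    goal = np.array([0.8, 0.7])  # desired object position
    best = plan_pick_and_place(obs, goal)
    print("best pick-and-place action:", best)
```

Because each action is a full pick-and-place rather than an elementary motion, a single planning step covers what would otherwise require a long low-level action sequence, which is the efficiency argument the abstract makes.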