Contrast, Imitate, Adapt: Learning Robotic Skills From Raw Human Videos

Author: Qian, Zhifeng, You, Mingyu, Zhou, Hongjun, Xu, Xuanhui, Fu, Hao, Xue, Jinzhe, He, Bin
Publication year: 2024
Subject:
Source: 2024 IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING
Document type: Working Paper
DOI: 10.1109/TASE.2024.3406610
Description: Learning robotic skills from raw human videos remains a non-trivial challenge. Previous works tackled this problem by behavior cloning or by learning reward functions from videos. Despite their remarkable performance, these approaches suffer from several issues, such as the need for robot action labels, the requirement of consistent viewpoints and similar layouts between human and robot videos, and low sample efficiency. To this end, our key insight is to learn task priors by contrasting videos, to learn action priors by imitating trajectories from videos, and to use the task priors to guide trajectories in adapting to novel scenarios. We propose a three-stage skill learning framework, Contrast-Imitate-Adapt (CIA). An interaction-aware alignment transformer (IAAformer) is proposed to learn task priors by temporally aligning video pairs. A trajectory generation model is then used to learn action priors. To adapt to novel scenarios that differ from the human videos, the Inversion-Interaction method is designed to initialize coarse trajectories and refine them through limited interaction. In addition, CIA introduces an optimization method based on semantic directions of trajectories for interaction safety and sample efficiency, where the alignment distances computed by IAAformer are used as the rewards. We evaluate CIA on six real-world everyday tasks and empirically demonstrate that it significantly outperforms previous state-of-the-art works in task success rate and in generalization to diverse novel scenarios, layouts, and object instances.
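The abstract's idea of turning alignment distances into rewards can be illustrated with a minimal sketch. This is not the paper's IAAformer; it assumes a generic dynamic-time-warping-style alignment cost between two sequences of frame embeddings (here, plain NumPy vectors), negated so that a robot rollout that aligns more closely with the human demonstration receives a higher reward:

```python
import numpy as np

def alignment_reward(robot_emb, human_emb):
    """Hypothetical reward: negative DTW alignment cost between two
    sequences of frame embeddings (shape [T, D] and [S, D]).
    A sketch of 'alignment distance as reward', not the paper's method."""
    T, S = len(robot_emb), len(human_emb)
    cost = np.full((T + 1, S + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, S + 1):
            # Local distance between the two frame embeddings.
            d = np.linalg.norm(robot_emb[i - 1] - human_emb[j - 1])
            # Standard DTW recurrence: extend the cheapest partial alignment.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return -cost[T, S]
```

Under this sketch, a rollout whose embedding sequence matches the demonstration exactly gets reward 0, and any mismatch yields a negative reward, which a policy optimizer could then maximize.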
Database: arXiv