Make static person walk again via separating pose action from shape

Authors: Yongwei Nie, Meihua Zhao, Qing Zhang, Ping Li, Jian Zhu, Hongmin Cai
Language: English
Year of publication: 2024
Source: Graphical Models, Vol 134, Art. 101222 (2024)
Document type: article
ISSN: 1524-0703
DOI: 10.1016/j.gmod.2024.101222
Description: This paper addresses the problem of animating a person in a static image, the core task of which is to infer the person's future poses. Existing approaches predict future poses in 2D space and therefore suffer from the entanglement of pose action and body shape. We propose a method that generates actions in 3D space and then transfers them to the 2D person: we first lift the person's 2D pose to a 3D skeleton, then propose a 3D action-synthesis network that predicts future skeletons, and finally devise a self-supervised action-transfer network that maps the actions of the 3D skeletons onto the 2D person. Actions generated in 3D space look plausible and vivid. More importantly, self-supervised action transfer allows our method to be trained solely on a 3D MoCap dataset while still processing images from different domains. Experiments on three image datasets validate the effectiveness of our method.
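The three-stage pipeline in the abstract (2D pose → 3D skeleton → future 3D skeletons → 2D person) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all function names, the placeholder depth lifting, the root-shift "action synthesis", and the orthographic re-projection are assumptions standing in for the paper's learned networks.

```python
# Hypothetical sketch of the pipeline described in the abstract.
# Each stage is a stub for a learned network in the actual method.

def lift_2d_to_3d(pose_2d):
    """Stage 1 (stand-in): lift a 2D pose, a list of (x, y) joints,
    to a 3D skeleton by appending a placeholder depth of 0.0."""
    return [(x, y, 0.0) for (x, y) in pose_2d]

def synthesize_3d_actions(skeleton_3d, num_frames):
    """Stage 2 (stand-in): predict future 3D skeletons. A real
    action-synthesis network would be learned; here each successive
    frame just translates every joint along x."""
    frames = []
    for t in range(1, num_frames + 1):
        frames.append([(x + 0.01 * t, y, z) for (x, y, z) in skeleton_3d])
    return frames

def transfer_action(person_image, skeleton_frames):
    """Stage 3 (stand-in): transfer the 3D skeleton motion back onto the
    2D person. Here we pair the image with each frame's orthographic
    2D projection (dropping depth)."""
    return [(person_image, [(x, y) for (x, y, _) in f])
            for f in skeleton_frames]

# End-to-end: static person -> animated sequence of posed frames
pose_2d = [(0.5, 0.9), (0.5, 0.6), (0.4, 0.3), (0.6, 0.3)]
skeleton = lift_2d_to_3d(pose_2d)
future = synthesize_3d_actions(skeleton, num_frames=3)
animation = transfer_action("person.png", future)
print(len(animation))  # 3 output frames
```

The design point the abstract emphasizes is that stage 2 operates entirely in 3D, so it can be trained on a MoCap dataset alone, while stage 3 is self-supervised and carries the motion back to images from arbitrary domains.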
Database: Directory of Open Access Journals