UltraPose: Synthesizing Dense Pose with 1 Billion Points by Human-body Decoupling 3D Model
Author: | Yan, Haonan, Chen, Jiaqi, Zhang, Xujie, Zhang, Shengkai, Jiao, Nianhong, Liang, Xiaodan, Zheng, Tianxiang |
---|---|
Publication year: | 2021 |
Subject: | |
Source: | ICCV 2021 |
Document type: | Working Paper |
Description: | Recovering dense human poses from images plays a critical role in establishing an image-to-surface correspondence between RGB images and the 3D surface of the human body, serving as the foundation for rich real-world applications such as virtual humans and monocular 3D reconstruction. However, the popular DensePose-COCO dataset relies on a sophisticated manual annotation system, which severely limits the acquisition of denser and more accurate pose annotations. In this work, we introduce a new 3D human-body model with a series of decoupled parameters that freely control the generation of the body. Furthermore, we build a data generation system based on this decoupled 3D model and construct an ultra-dense synthetic benchmark, UltraPose, containing around 1.3 billion corresponding points. Compared to the existing manually annotated DensePose-COCO dataset, the synthetic UltraPose provides ultra-dense image-to-surface correspondences without annotation cost or error. Our proposed UltraPose offers the largest benchmark and data resource for lifting model capability in predicting more accurate dense poses. To promote future research in this field, we also propose a transformer-based method to model the dense correspondence between the 2D and 3D worlds. The proposed model, trained on the synthetic UltraPose, can be applied to real-world scenarios, indicating the effectiveness of our benchmark and model. Comment: Accepted to ICCV 2021 |
Database: | arXiv |
External link: |
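The abstract's "image-to-surface correspondence" can be made concrete with a small sketch. Below is a toy illustration assuming the common DensePose-style IUV convention (per-pixel body-part index I plus continuous surface coordinates U, V); this record does not specify UltraPose's exact storage format, so the array layout and part numbering here are assumptions for illustration only.

```python
import numpy as np

# Toy 4x4 "image": each annotated foreground pixel maps to a point on
# the 3D body surface, expressed as (part index I, surface coords U, V).
H, W = 4, 4

# I: body-part index per pixel (0 = background; DensePose uses 1..24
# surface patches -- the patch count is an assumption here).
I = np.zeros((H, W), dtype=np.int32)
I[1:3, 1:3] = 3  # pretend a 2x2 region belongs to body part 3

# U, V: continuous coordinates on the body surface, each in [0, 1].
rng = np.random.default_rng(0)
U = rng.random((H, W)).astype(np.float32)
V = rng.random((H, W)).astype(np.float32)

# Every annotated foreground pixel contributes one image-to-surface
# correspondence point; a benchmark's "point count" tallies these
# across all images.
points = [(y, x, int(I[y, x]), float(U[y, x]), float(V[y, x]))
          for y in range(H) for x in range(W) if I[y, x] > 0]
print(len(points))  # 4 correspondence points in this toy example
```

In this framing, manual annotation samples only a handful of such points per person, while a synthetic pipeline can emit one for every rendered foreground pixel, which is how a dataset can reach billions of correspondences without human labeling.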