Showing 1 - 10 of 71 for search: '"Zheng, Zerong"'
We present a novel pipeline for learning high-quality triangular human avatars from multi-view videos. Recent methods for avatar learning are typically based on neural radiance fields (NeRF), which is not compatible with the traditional graphics pipeline…
External link:
http://arxiv.org/abs/2407.08414
We present a novel approach for generating 360-degree high-quality, spatio-temporally coherent human videos from a single image. Our framework combines the strengths of diffusion transformers for capturing global correlations across viewpoints and time…
External link:
http://arxiv.org/abs/2405.17405
Animatable clothing transfer, aiming at dressing and animating garments across characters, is a challenging problem. Most human avatar works entangle the representations of the human body and clothing together, which leads to difficulties for virtual…
External link:
http://arxiv.org/abs/2405.07319
Creating high-fidelity 3D head avatars has always been a research hotspot, but there remains a great challenge under lightweight sparse-view setups. In this paper, we propose Gaussian Head Avatar, represented by controllable 3D Gaussians, for high-fidelity…
External link:
http://arxiv.org/abs/2312.03029
Modeling animatable human avatars from RGB videos is a long-standing and challenging problem. Recent works usually adopt MLP-based neural radiance fields (NeRF) to represent 3D humans, but it remains difficult for pure MLPs to regress pose-dependent…
External link:
http://arxiv.org/abs/2311.16096
We address the problem of aligning real-world 3D data of garments, which benefits many applications such as texture learning, physical parameter estimation, and generative modeling of garments. Existing extrinsic methods typically perform non-rigid…
External link:
http://arxiv.org/abs/2308.09519
Author:
Shao, Ruizhi, Sun, Jingxiang, Peng, Cheng, Zheng, Zerong, Zhou, Boyao, Zhang, Hongwen, Liu, Yebin
Recent years have witnessed considerable achievements in editing images with text instructions. When applying these editors to dynamic scene editing, the new-style scene tends to be temporally inconsistent due to the frame-by-frame nature of these 2D editors…
External link:
http://arxiv.org/abs/2305.20082
We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data. The learnt avatar not only provides expressive control of the body, hands and face together, but also supports real-time animation and rendering. To this…
External link:
http://arxiv.org/abs/2305.04789
Creating pose-driven human avatars is about modeling the mapping from the low-frequency driving pose to high-frequency dynamic human appearances, so an effective pose encoding method that can encode high-fidelity human details is essential to human avatar…
External link:
http://arxiv.org/abs/2304.13006
Author:
Zhang, Hongwen, Lin, Siyou, Shao, Ruizhi, Zhang, Yuxiang, Zheng, Zerong, Huang, Han, Guo, Yandong, Liu, Yebin
Creating animatable avatars from static scans requires the modeling of clothing deformations in different poses. Existing learning-based methods typically add pose-dependent deformations upon a minimally-clothed mesh template or a learned implicit template…
External link:
http://arxiv.org/abs/2304.03167