Showing 1 - 10 of 58 results for search: '"Hong, Fangzhou"'
Author:
Hong, Fangzhou, Guzov, Vladimir, Kim, Hyo Jin, Ye, Yuting, Newcombe, Richard, Liu, Ziwei, Ma, Lingni
With the growing prevalence of wearable devices, learning egocentric motion becomes essential to developing contextual AI. In this work, we present EgoLM, a versatile framework that tracks and understands egocentric motions from multi-modal inputs, e.g., egocent…
External link:
http://arxiv.org/abs/2409.18127
Author:
Guzov, Vladimir, Jiang, Yifeng, Hong, Fangzhou, Pons-Moll, Gerard, Newcombe, Richard, Liu, C. Karen, Ye, Yuting, Ma, Lingni
This paper investigates the online generation of realistic full-body human motion using a single head-mounted device with an outward-facing color camera and the ability to perform visual SLAM. Given the inherent ambiguity of this setup, we introduce…
External link:
http://arxiv.org/abs/2409.13426
Author:
Chen, Zhaoxi, Tang, Jiaxiang, Dong, Yuhao, Cao, Ziang, Hong, Fangzhou, Lan, Yushi, Wang, Tengfei, Xie, Haozhe, Wu, Tong, Saito, Shunsuke, Pan, Liang, Lin, Dahua, Liu, Ziwei
The increasing demand for high-quality 3D assets across various industries necessitates efficient and automated 3D content creation. Despite recent advancements in 3D generative models, existing methods still face challenges with optimization speed,…
External link:
http://arxiv.org/abs/2409.12957
Author:
Ma, Lingni, Ye, Yuting, Hong, Fangzhou, Guzov, Vladimir, Jiang, Yifeng, Postyeni, Rowan, Pesqueira, Luis, Gamino, Alexander, Baiyya, Vijay, Kim, Hyo Jin, Bailey, Kevin, Fosas, David Soriano, Liu, C. Karen, Liu, Ziwei, Engel, Jakob, De Nardi, Renzo, Newcombe, Richard
We introduce Nymeria - a large-scale, diverse, richly annotated human motion dataset collected in the wild with multiple multimodal egocentric devices. The dataset comes with a) full-body ground-truth motion; b) multiple multimodal egocentric data fr…
External link:
http://arxiv.org/abs/2406.09905
3D city generation with NeRF-based methods shows promising generation results but is computationally inefficient. Recently, 3D Gaussian Splatting (3D-GS) has emerged as a highly efficient alternative for object-level 3D generation. However, adapting 3…
External link:
http://arxiv.org/abs/2406.06526
Author:
Yang, Jingkang, Cen, Jun, Peng, Wenxuan, Liu, Shuai, Hong, Fangzhou, Li, Xiangtai, Zhou, Kaiyang, Chen, Qifeng, Liu, Ziwei
We are living in a three-dimensional space while moving forward through a fourth dimension: time. To allow artificial intelligence to develop a comprehensive understanding of such a 4D environment, we introduce 4D Panoptic Scene Graph (PSG-4D), a new…
External link:
http://arxiv.org/abs/2405.10305
Generating diverse and high-quality 3D assets automatically poses a fundamental yet challenging task in 3D computer vision. Despite extensive efforts in 3D generation, existing optimization-based approaches struggle to produce large-scale 3D assets e…
External link:
http://arxiv.org/abs/2405.08055
We present FashionEngine, an interactive 3D human generation and editing system that creates 3D digital humans via user-friendly multimodal controls such as natural language, visual perception, and hand-drawn sketches. FashionEngine automates the…
External link:
http://arxiv.org/abs/2404.01655
Author:
Zhang, Mingyuan, Jin, Daisheng, Gu, Chenyang, Hong, Fangzhou, Cai, Zhongang, Huang, Jingfang, Zhang, Chongzhi, Guo, Xinying, Yang, Lei, He, Ying, Liu, Ziwei
Human motion generation, a cornerstone technique in animation and video production, has widespread applications in various tasks like text-to-motion and music-to-dance. Previous works focus on developing specialist models tailored for each task witho…
External link:
http://arxiv.org/abs/2404.01284
Recent 3D human generative models have achieved remarkable progress by learning 3D-aware GANs from 2D images. However, existing 3D human generative methods model humans in a compact 1D latent space, ignoring the articulated structure and semantics of…
External link:
http://arxiv.org/abs/2404.01241