Showing 1 - 10 of 112 results for search: '"Bao, Linchao"'
Author:
Wang, Cong, Kang, Di, Sun, He-Yi, Qian, Shen-Han, Wang, Zi-Xuan, Bao, Linchao, Zhang, Song-Hai
Creating high-fidelity head avatars from multi-view videos is a core issue for many AR/VR applications. However, existing methods usually struggle to obtain high-quality renderings for all different head components simultaneously since they use one s…
External link:
http://arxiv.org/abs/2404.19026
Rendering photorealistic and dynamically moving human heads is crucial for ensuring a pleasant and immersive experience in AR/VR and video conferencing applications. However, existing methods often struggle to model challenging facial regions (e.g., …
External link:
http://arxiv.org/abs/2307.05000
Author:
Zhang, Jiaxu, Weng, Junwu, Kang, Di, Zhao, Fang, Huang, Shaoli, Zhe, Xuefei, Bao, Linchao, Shan, Ying, Wang, Jue, Tu, Zhigang
A good motion retargeting cannot be reached without reasonable consideration of source-target differences on both the skeleton and shape geometry levels. In this work, we propose a novel Residual RETargeting network (R2ET) structure, which relies on …
External link:
http://arxiv.org/abs/2303.08658
Author:
Xiong, Zhangyang, Kang, Di, Jin, Derong, Chen, Weikai, Bao, Linchao, Cui, Shuguang, Han, Xiaoguang
Fast generation of high-quality 3D digital humans is important to a vast number of applications ranging from entertainment to professional concerns. Recent advances in differentiable rendering have enabled the training of 3D generative models without …
External link:
http://arxiv.org/abs/2302.01162
People may perform diverse gestures affected by various mental and physical factors when speaking the same sentences. This inherent one-to-many relationship makes co-speech gesture generation from audio particularly challenging. Conventional CNNs/RNN…
External link:
http://arxiv.org/abs/2301.06690
We present a novel audio-driven facial animation approach that can generate realistic lip-synchronized 3D facial animations from the input audio. Our approach learns viseme dynamics from speech videos, produces animator-friendly viseme curves, and su…
External link:
http://arxiv.org/abs/2301.06059
Author:
Huang, Ye, Kang, Di, Chen, Liang, Jia, Wenjing, He, Xiangjian, Duan, Lixin, Zhe, Xuefei, Bao, Linchao
Semantic segmentation has recently achieved notable advances by exploiting "class-level" contextual information during learning. However, these approaches simply concatenate class-level information to pixel features to boost the pixel representation …
External link:
http://arxiv.org/abs/2301.04258
We present a large-scale facial UV-texture dataset that contains over 50,000 high-quality texture UV-maps with even illuminations, neutral expressions, and cleaned facial regions, which are desired characteristics for rendering realistic 3D face mode…
External link:
http://arxiv.org/abs/2211.13874
Author:
Liu, Yahui, Sangineto, Enver, Chen, Yajing, Bao, Linchao, Zhang, Haoxian, Sebe, Nicu, Lepri, Bruno, De Nadai, Marco
Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain. One important, desired characteristic of these transformations, is their graduality, which corresponds to a smooth change between t…
External link:
http://arxiv.org/abs/2210.00841
We present a neural network-based system for long-term, multi-action human motion synthesis. The system, dubbed NEURAL MARIONETTE, can produce high-quality and meaningful motions with smooth transitions from simple user input, including a sequence …
External link:
http://arxiv.org/abs/2209.13204