Showing 1 - 10 of 110 for search: '"Huang, Yangyi"'
Author:
Yuan, Ye, Li, Xueting, Huang, Yangyi, De Mello, Shalini, Nagano, Koki, Kautz, Jan, Iqbal, Umar
Gaussian splatting has emerged as a powerful 3D representation that harnesses the advantages of both explicit (mesh) and implicit (NeRF) 3D representations. In this paper, we seek to leverage Gaussian splatting to generate realistic animatable avatar…
External link:
http://arxiv.org/abs/2312.11461
Author:
Liao, Tingting, Yi, Hongwei, Xiu, Yuliang, Tang, Jiaxiang, Huang, Yangyi, Thies, Justus, Black, Michael J.
We introduce TADA, a simple-yet-effective approach that takes textual descriptions and produces expressive 3D avatars with high-quality geometry and lifelike textures, that can be animated and rendered with traditional graphics pipelines. Existing te…
External link:
http://arxiv.org/abs/2308.10899
Author:
Huang, Yangyi, Yi, Hongwei, Xiu, Yuliang, Liao, Tingting, Tang, Jiaxiang, Cai, Deng, Thies, Justus
Despite recent research advancements in reconstructing clothed humans from a single image, accurately restoring the "unseen regions" with high-level details remains an unsolved challenge that lacks attention. Existing methods often generate overly sm…
External link:
http://arxiv.org/abs/2308.08545
Author:
Huang, Yangyi, Yi, Hongwei, Liu, Weiyang, Wang, Haofan, Wu, Boxi, Wang, Wenxiao, Lin, Binbin, Zhang, Debing, Cai, Deng
Existing neural rendering methods for creating human avatars typically either require dense input signals such as video or multi-view images, or leverage a learned prior from large-scale specific 3D human datasets such that reconstruction can be perf…
External link:
http://arxiv.org/abs/2212.02469
Author:
Huang, Yangyi, Zhong, Haosong, Yang, Rongliang, Pan, Yexin, Lin, Jing, Lee, Connie Kong Wai, Chen, Siyu, Tan, Min, Lu, Xupeng, Poon, Wing Yan, Yuan, Qiaoyaxiao, Li, Mitch Guijun
Published in:
Biosensors and Bioelectronics, vol. 259, 1 September 2024
Author:
Liu, Rui, Deng, Hanming, Huang, Yangyi, Shi, Xiaoyu, Lu, Lewei, Sun, Wenxiu, Wang, Xiaogang, Dai, Jifeng, Li, Hongsheng
Transformer, as a strong and flexible architecture for modelling long-range relations, has been widely explored in vision tasks. However, when used in video inpainting, which requires fine-grained representation, existing methods still suffer from yield…
External link:
http://arxiv.org/abs/2109.02974
Author:
Liu, Rui, Deng, Hanming, Huang, Yangyi, Shi, Xiaoyu, Lu, Lewei, Sun, Wenxiu, Wang, Xiaogang, Dai, Jifeng, Li, Hongsheng
Video inpainting aims to fill the given spatiotemporal holes with realistic appearance but is still a challenging task even with prosperous deep learning approaches. Recent works introduce the promising Transformer architecture into deep video inpain…
External link:
http://arxiv.org/abs/2104.06637
Author:
Peng, Xiaoliao, Huang, Yangyi, Wang, Yuliang, Shang, Jianmin, Shen, Yang, Chen, Zhi, Zhou, Xingtao, Han, Tian
Published in:
Experimental Eye Research, vol. 239, February 2024
Academic article
This result is not available to users who are not loggedged in; sign in to view it.
Author:
Huang, Yangyi, Wang, Yuliang, Shen, Yang, Chen, Zhi, Peng, Xiaoliao, Zhang, Luoli, Han, Tian, Zhou, Xingtao
Published in:
Experimental Eye Research, vol. 233, August 2023