Showing 1 - 10 of 27 for search: '"Pang, Anqi"'
Author:
Cheng, Wei, Mu, Juncheng, Zeng, Xianfang, Chen, Xin, Pang, Anqi, Zhang, Chi, Wang, Zhibin, Fu, Bin, Yu, Gang, Liu, Ziwei, Pan, Liang
Texturing is a crucial step in the 3D asset production workflow, which enhances the visual appeal and diversity of 3D assets. Despite recent advancements in Text-to-Texture (T2T) generation, existing methods often yield subpar results, primarily due…
External link:
http://arxiv.org/abs/2411.02336
Author:
Chen, Sijin, Chen, Xin, Pang, Anqi, Zeng, Xianfang, Cheng, Wei, Fu, Yijun, Yin, Fukun, Wang, Yanru, Wang, Zhibin, Zhang, Chi, Yu, Jingyi, Yu, Gang, Fu, Bin, Chen, Tao
The polygon mesh representation of 3D data exhibits great flexibility, fast rendering speed, and storage efficiency, which is widely preferred in various applications. However, given its unstructured graph representation, the direct generation of hig…
External link:
http://arxiv.org/abs/2405.20853
Author:
Zhang, Longwen, Wang, Ziyu, Zhang, Qixuan, Qiu, Qiwei, Pang, Anqi, Jiang, Haoran, Yang, Wei, Xu, Lan, Yu, Jingyi
In the realm of digital creativity, our potential to craft intricate 3D worlds from imagination is often hampered by the limitations of existing digital tools, which demand extensive expertise and efforts. To narrow this disparity, we introduce CLAY, …
External link:
http://arxiv.org/abs/2406.13897
Author:
Sun, Guoxing, Chen, Xin, Chen, Yizhang, Pang, Anqi, Lin, Pei, Jiang, Yuheng, Xu, Lan, Wang, Jingya, Yu, Jingyi
4D reconstruction of human-object interaction is critical for immersive VR/AR experience and human activity understanding. Recent advances still fail to recover fine geometry and texture results from sparse RGB inputs, especially under challenging hu…
External link:
http://arxiv.org/abs/2108.00362
Recent neural rendering approaches for human activities achieve remarkable view synthesis results, but still rely on dense input views or dense training with all the capture frames, leading to deployment difficulty and inefficient training overload. …
External link:
http://arxiv.org/abs/2107.06505
Markerless motion capture and understanding of professional non-daily human movements is an important yet unsolved task, which suffers from complex motion patterns and severe self-occlusion, especially for the monocular setting. In this paper, we pro…
External link:
http://arxiv.org/abs/2104.11452
Capturing challenging human motions is critical for numerous applications, but it suffers from complex motion patterns and severe self-occlusion under the monocular setting. In this paper, we propose ChallenCap -- a template-based approach to capture…
External link:
http://arxiv.org/abs/2103.06747
In this paper, we present TightCap, a data-driven scheme to capture both the human shape and dressed garments accurately with only a single 3D human scan, which enables numerous applications such as virtual try-on, biometrics and body evaluation. To…
External link:
http://arxiv.org/abs/1904.02601
Published in:
Journal of Electroanalytical Chemistry, 1 May 2017, 792:88-94
Academic article