Showing 1 - 10
of 476
for search: '"Lin, Guosheng"'
Author:
Wu, Yizheng, Pan, Zhiyu, Wang, Kewei, Li, Xingyi, Cui, Jiahao, Xiao, Liwen, Lin, Guosheng, Cao, Zhiguo
Large-scale datasets with point-wise semantic and instance labels are crucial to 3D instance segmentation but also expensive. To leverage unlabeled data, previous semi-supervised 3D instance segmentation approaches have explored self-training frameworks…
External link:
http://arxiv.org/abs/2406.16776
Author:
Chen, Yiwen, He, Tong, Huang, Di, Ye, Weicai, Chen, Sijin, Tang, Jiaxiang, Chen, Xin, Cai, Zhongang, Yang, Lei, Yu, Gang, Lin, Guosheng, Zhang, Chi
Recently, 3D assets created via reconstruction and generation have matched the quality of manually crafted assets, highlighting their potential for replacement. However, this potential is largely unrealized because these assets always need to be converted…
External link:
http://arxiv.org/abs/2406.10163
Author:
Huang, Yuzhong, Li, Zhong, Chen, Zhang, Ren, Zhiyuan, Lin, Guosheng, Morstatter, Fred, Xu, Yi
In the evolving landscape of text-to-3D technology, Dreamfusion has showcased its proficiency by utilizing Score Distillation Sampling (SDS) to optimize implicit representations such as NeRF. This process is achieved through the distillation of pretrained…
External link:
http://arxiv.org/abs/2406.10000
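The SDS technique named in the abstract above is, per the original DreamFusion paper (not this listing), typically written as the following gradient; the symbols below are the standard ones from that formulation:

\[
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
= \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \right]
\]

where \(x = g(\theta)\) is an image rendered from the implicit 3D representation (e.g. NeRF) with parameters \(\theta\), \(x_t\) is its noised version at timestep \(t\), \(\hat{\epsilon}_\phi\) is the pretrained diffusion model's noise prediction conditioned on the text prompt \(y\), \(\epsilon\) is the sampled noise, and \(w(t)\) is a timestep-dependent weighting.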
Large-scale diffusion models have achieved remarkable performance in generative tasks. Beyond their initial training applications, these models have proven their ability to function as versatile plug-and-play priors. For instance, 2D diffusion models…
External link:
http://arxiv.org/abs/2406.03293
Author:
Fu, Zhoujie, Wei, Jiacheng, Shen, Wenhao, Song, Chaoyue, Yang, Xiaofeng, Liu, Fayao, Yang, Xulei, Lin, Guosheng
In this work, we introduce a novel approach for creating controllable dynamics in 3D-generated Gaussians using casually captured reference videos. Our method transfers the motion of objects from reference videos to a variety of generated 3D Gaussians…
External link:
http://arxiv.org/abs/2405.16849
Reverse engineering CAD models from raw geometry is a classic but challenging research problem. In particular, reconstructing the CAD modeling sequence from point clouds provides great interpretability and convenience for editing. To improve upon this…
External link:
http://arxiv.org/abs/2405.15188
In this paper, we address the challenge of reconstructing general articulated 3D objects from a single video. Existing works employing dynamic neural radiance fields have advanced the modeling of articulated objects like humans and animals from video…
External link:
http://arxiv.org/abs/2404.11151
Author:
Yang, Fan, Zhang, Jianfeng, Shi, Yichun, Chen, Bowen, Zhang, Chenxu, Zhang, Huichao, Yang, Xiaofeng, Feng, Jiashi, Lin, Guosheng
Benefiting from the rapid development of 2D diffusion models, 3D content creation has made significant progress recently. One promising solution involves the fine-tuning of pre-trained 2D diffusion models to harness their capacity for producing multi…
External link:
http://arxiv.org/abs/2404.06429
Author:
Wang, Kewei, Wu, Yizheng, Cen, Jun, Pan, Zhiyu, Li, Xingyi, Wang, Zhe, Cao, Zhiguo, Lin, Guosheng
The perception of motion behavior in a dynamic environment holds significant importance for autonomous driving systems, wherein class-agnostic motion prediction methods directly predict the motion of the entire point cloud. While most existing methods…
External link:
http://arxiv.org/abs/2403.13261
Author:
Chen, Cheng, Yang, Xiaofeng, Yang, Fan, Feng, Chengzeng, Fu, Zhoujie, Foo, Chuan-Sheng, Lin, Guosheng, Liu, Fayao
Recent works on text-to-3d generation show that using only 2D diffusion supervision for 3D generation tends to produce results with inconsistent appearances (e.g., faces on the back view) and inaccurate shapes (e.g., animals with extra legs). Existing…
External link:
http://arxiv.org/abs/2403.09140