Showing 1 - 10 of 45 for search: '"LI, ZHENGQIN"'
Author:
Gao, Will, Wang, Dilin, Fan, Yuchen, Bozic, Aljaz, Stuyck, Tuur, Li, Zhengqin, Dong, Zhao, Ranjan, Rakesh, Sarafianos, Nikolaos
We present a novel approach to mesh shape editing, building on recent progress in 3D reconstruction from multi-view images. We formulate shape editing as a conditional reconstruction problem, where the model must reconstruct the input shape with the…
External link:
http://arxiv.org/abs/2412.08641
Author:
Fischer, Michael, Li, Zhengqin, Nguyen-Phuoc, Thu, Bozic, Aljaz, Dong, Zhao, Marshall, Carl, Ritschel, Tobias
A Neural Radiance Field (NeRF) encodes the specific relation of 3D geometry and appearance of a scene. We here ask the question whether we can transfer the appearance from a source NeRF onto a target 3D geometry in a semantically meaningful way, such…
External link:
http://arxiv.org/abs/2402.08622
Author:
Bartrum, Edward, Nguyen-Phuoc, Thu, Xie, Chris, Li, Zhengqin, Khan, Numair, Avetisyan, Armen, Lanman, Douglas, Xiao, Lei
We introduce the ReplaceAnything3D model (RAM3D), a novel text-guided 3D scene editing method that enables the replacement of specific objects within a scene. Given multi-view images of a scene, a text prompt describing the object to replace, and a text…
External link:
http://arxiv.org/abs/2401.17895
Author:
Lin, Zhi-Hao, Huang, Jia-Bin, Li, Zhengqin, Dong, Zhao, Richardt, Christian, Li, Tuotuo, Zollhöfer, Michael, Kopf, Johannes, Wang, Shenlong, Kim, Changil
While numerous 3D reconstruction and novel-view synthesis methods allow for photorealistic rendering of a scene from multi-view images easily captured with consumer cameras, they bake illumination into their representations and fall short of supporting…
External link:
http://arxiv.org/abs/2401.12977
Author:
Liang, Yiqing, Khan, Numair, Li, Zhengqin, Nguyen-Phuoc, Thu, Lanman, Douglas, Tompkin, James, Xiao, Lei
We propose a method that achieves state-of-the-art rendering quality and efficiency on monocular dynamic scene reconstruction using deformable 3D Gaussians. Implicit deformable representations commonly model motion with a canonical space and time-dep…
External link:
http://arxiv.org/abs/2312.11458
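The snippet above mentions the canonical-space-plus-deformation pattern common to deformable scene representations. The following PyTorch sketch illustrates that idea in minimal form; the tiny MLP, its inputs, and all sizes are illustrative assumptions, not the paper's architecture.

# Minimal sketch: canonical Gaussian centres deformed by a time-conditioned field.
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Maps a canonical Gaussian centre and a timestamp to a positional offset."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyz, t):
        # xyz: (N, 3) canonical centres, t: (N, 1) normalised timestamps
        return self.mlp(torch.cat([xyz, t], dim=-1))

centres = torch.randn(1024, 3, requires_grad=True)  # learnable canonical centres
field = DeformationField()
t = torch.full((1024, 1), 0.25)                      # query time
deformed = centres + field(centres, t)               # positions used for rendering at time t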
Published in:
SIGGRAPH Asia 2023 Conference Papers (SA Conference Papers '23), December 12--15, 2023, Sydney, NSW, Australia
We introduce differentiable indirection -- a novel learned primitive that employs differentiable multi-scale lookup tables as an effective substitute for traditional compute and data operations across the graphics pipeline. We demonstrate its flexibi…
External link:
http://arxiv.org/abs/2309.08387
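As a rough illustration of the lookup-table idea described above, the PyTorch sketch below queries a single learnable 2D table with bilinear interpolation so that gradients flow into the table entries; the single scale, table size, and channel count are assumptions standing in for the multi-scale primitive.

# Minimal sketch: a differentiable 2D lookup table queried by interpolation.
import torch
import torch.nn.functional as F

# Learnable table: 1 batch, 8 channels, 32 x 32 entries.
table = torch.randn(1, 8, 32, 32, requires_grad=True)

def lookup(coords):
    # coords: (N, 2) query coordinates in [0, 1]; grid_sample expects [-1, 1].
    grid = coords.view(1, -1, 1, 2) * 2.0 - 1.0
    out = F.grid_sample(table, grid, mode='bilinear', align_corners=True)
    return out.view(8, -1).t()           # (N, 8) interpolated table values

coords = torch.rand(4096, 2)
values = lookup(coords)
values.sum().backward()                  # gradients flow back into the table entries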
We propose a physically-motivated deep learning framework to solve a general version of the challenging indoor lighting estimation problem. Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given imag…
External link:
http://arxiv.org/abs/2305.04374
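The snippet above describes the estimator's interface: a single LDR image plus a depth map in, spatially varying lighting out. A minimal PyTorch sketch of that interface follows; the tiny convolutional network and the second-order spherical-harmonic output are placeholder assumptions, not the paper's model.

# Minimal sketch: per-pixel lighting predicted from an LDR image and a depth map.
import torch
import torch.nn as nn

class PerPixelLighting(nn.Module):
    def __init__(self, sh_order=2):
        super().__init__()
        n_coeffs = (sh_order + 1) ** 2               # 9 SH coefficients per colour channel
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * n_coeffs, 3, padding=1),
        )

    def forward(self, ldr_rgb, depth):
        # ldr_rgb: (B, 3, H, W), depth: (B, 1, H, W) -> (B, 27, H, W) lighting map
        return self.net(torch.cat([ldr_rgb, depth], dim=1))

model = PerPixelLighting()
rgb = torch.rand(1, 3, 240, 320)
depth = torch.rand(1, 1, 240, 320)
lighting = model(rgb, depth)                         # query any pixel for its local lighting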
Author:
Sun, Cheng, Cai, Guangyan, Li, Zhengqin, Yan, Kai, Zhang, Cheng, Marshall, Carl, Huang, Jia-Bin, Zhao, Shuang, Dong, Zhao
Reconstructing the shape and spatially varying surface appearances of a physical-world object as well as its surrounding illumination based on 2D images (e.g., photographs) of the object has been a long-standing problem in computer vision and graphic…
External link:
http://arxiv.org/abs/2304.13445
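The entry above concerns inverse rendering: recovering shape, spatially varying materials, and illumination by fitting a differentiable image-formation model to photographs. The sketch below shows that analysis-by-synthesis loop in PyTorch under heavy simplifying assumptions: per-pixel albedo and normals shaded by a single Lambertian directional light stand in for the full shape/SVBRDF/illumination model, and the target image is synthetic.

# Minimal sketch: fit albedo, normals, and a light direction to a target image.
import torch
import torch.nn.functional as F

H, W = 64, 64
target = torch.rand(3, H, W)                        # stand-in for an input photograph

# Unknowns: per-pixel albedo, per-pixel normals (via unnormalised vectors), light direction.
albedo = torch.full((3, H, W), 0.5, requires_grad=True)
normals_raw = torch.randn(3, H, W, requires_grad=True)
light_dir = torch.tensor([0.0, 0.0, 1.0], requires_grad=True)

opt = torch.optim.Adam([albedo, normals_raw, light_dir], lr=1e-2)
for step in range(200):
    n = F.normalize(normals_raw, dim=0)
    l = F.normalize(light_dir, dim=0)
    shading = (n * l.view(3, 1, 1)).sum(dim=0).clamp(min=0.0)   # Lambertian n.l term
    rendered = albedo * shading                                  # differentiable image formation
    loss = F.mse_loss(rendered, target)
    opt.zero_grad()
    loss.backward()
    opt.step()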
Author:
Yeh, Yu-Ying, Li, Zhengqin, Hold-Geoffroy, Yannick, Zhu, Rui, Xu, Zexiang, Hašan, Miloš, Sunkavalli, Kalyan, Chandraker, Manmohan
Most indoor 3D scene reconstruction methods focus on recovering 3D geometry and scene layout. In this work, we go beyond this to propose PhotoScene, a framework that takes input image(s) of a scene along with approximately aligned CAD geometry (eithe…
External link:
http://arxiv.org/abs/2207.00757
Real-world applications require a robot operating in the physical world to be aware of potential risks in addition to accomplishing its task. Much of this risky behavior arises from interacting with objects without knowledge of their affordances. To prevent the…
External link:
http://arxiv.org/abs/2206.12784