Showing 1 - 10 of 45 results for the search: '"Liu, Zhengzhe"'
Recent advancements in diffusion-based video generation have showcased remarkable results, yet the gap between synthetic and real-world videos remains under-explored. In this study, we examine this gap from three fundamental perspectives: appearance, …
External link: http://arxiv.org/abs/2406.19568
Authors: Liu, Zhengzhe; Liu, Qing; Chang, Chirui; Zhang, Jianming; Pakhomov, Daniil; Zheng, Haitian; Lin, Zhe; Cohen-Or, Daniel; Fu, Chi-Wing
Deoccluding the hidden portions of objects in a scene is a formidable task, particularly when addressing real-world scenes. In this paper, we present a new self-supervised PArallel visible-to-COmplete diffusion framework, named PACO, a foundation model …
External link: http://arxiv.org/abs/2406.07706
This paper introduces a new approach based on a coupled representation and a neural volume optimization to implicitly perform 3D shape editing in latent space. This work has three innovations. First, we design the coupled neural shape (CNS) representation …
External link: http://arxiv.org/abs/2402.02313
Authors: Hui, Ka-Hei; Sanghi, Aditya; Rampini, Arianna; Malekshan, Kamal Rahimi; Liu, Zhengzhe; Shayani, Hooman; Fu, Chi-Wing
Significant progress has been made in training large generative models for natural language and images. Yet, the advancement of 3D generative models is hindered by their substantial resource demands for training, along with inefficient, non-compact, …
External link: http://arxiv.org/abs/2401.11067
This paper presents a new text-guided technique for generating 3D shapes. The technique leverages a hybrid 3D shape representation, namely EXIM, combining the strengths of explicit and implicit representations. Specifically, the explicit stage controls …
External link: http://arxiv.org/abs/2311.01714
In this work, we focus on synthesizing high-quality textures on 3D meshes. We present Point-UV diffusion, a coarse-to-fine pipeline that marries the denoising diffusion model with UV mapping to generate 3D consistent and high-quality texture images in …
External link: http://arxiv.org/abs/2308.10490
This paper presents CLIPXPlore, a new framework that leverages a vision-language model to guide the exploration of the 3D shape space. Many recent methods have been developed to encode 3D shapes into a learned latent shape space to enable generative …
External link: http://arxiv.org/abs/2306.08226
3D scene understanding, e.g., point cloud semantic and instance segmentation, often requires large-scale annotated training data, but clearly, point-wise labels are too tedious to prepare. While some recent methods propose to train a 3D network with …
External link: http://arxiv.org/abs/2303.14727
In this paper, we present a new text-guided 3D shape generation approach DreamStone that uses images as a stepping stone to bridge the gap between text and shape modalities for generating 3D shapes without requiring paired text and 3D data. The core …
External link: http://arxiv.org/abs/2303.15181
This paper presents a new approach for 3D shape generation, inversion, and manipulation, through a direct generative modeling on a continuous implicit representation in wavelet domain. Specifically, we propose a compact wavelet representation with a …
External link: http://arxiv.org/abs/2302.00190