Showing 1 - 10 of 96 for search: '"Li, Ruihui"'
Multimodal large language models (MLLMs) have demonstrated strong performance across various tasks without requiring training from scratch. However, they face significant computational and memory constraints, particularly when processing multimodal …
External link:
http://arxiv.org/abs/2410.07278
In this paper, we present a method, VectorPD, for converting a given human face image into a vector portrait sketch. VectorPD supports different levels of abstraction by simply controlling the number of strokes. Since vector graphics are composed of …
External link:
http://arxiv.org/abs/2410.04182
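As context for the snippet above: a vector sketch is just a list of parametric strokes, and the abstraction level is the number of strokes kept. A minimal illustration in Python (not VectorPD's code; the coordinates, file name, and stroke count below are made up) that writes such strokes to an SVG file:

    # Minimal sketch, not VectorPD: a "portrait" as a list of cubic
    # Bezier strokes; abstraction = how many strokes are kept.
    import random

    def write_sketch_svg(strokes, path="sketch.svg", size=256):
        parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
        for (x0, y0), (c1x, c1y), (c2x, c2y), (x1, y1) in strokes:
            parts.append(
                f'<path d="M {x0} {y0} C {c1x} {c1y}, {c2x} {c2y}, {x1} {y1}" '
                'stroke="black" fill="none" stroke-width="2"/>'
            )
        parts.append("</svg>")
        with open(path, "w") as f:
            f.write("\n".join(parts))

    # 16 random strokes stand in for learned ones; fewer strokes
    # would give a more abstract sketch.
    random.seed(0)
    strokes = [tuple((random.uniform(0, 256), random.uniform(0, 256)) for _ in range(4))
               for _ in range(16)]
    write_sketch_svg(strokes)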
Scene sketching converts a scene into a simplified, abstract representation that captures the essential elements and composition of the original scene. It requires semantic understanding of the scene and consideration of different regions within …
External link:
http://arxiv.org/abs/2410.04072
In this paper, we present a new text-guided 3D shape generation approach, DreamStone, which uses images as a stepping stone to bridge the gap between the text and shape modalities for generating 3D shapes without requiring paired text and 3D data. The core …
External link:
http://arxiv.org/abs/2303.15181
This paper presents a new approach for 3D shape generation, inversion, and manipulation, through direct generative modeling on a continuous implicit representation in the wavelet domain. Specifically, we propose a compact wavelet representation with a …
External link:
http://arxiv.org/abs/2302.00190
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in the wavelet domain. Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient …
External link:
http://arxiv.org/abs/2209.08725
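The two entries above (arXiv:2302.00190 and arXiv:2209.08725) build on the same compact representation: an implicit shape field decomposed into a coarse coefficient volume plus detail coefficient volumes in the wavelet domain. A minimal sketch of such a decomposition using PyWavelets follows; the grid size, wavelet family, and variable names are illustrative assumptions, not the papers' configuration:

    # Minimal sketch, assuming a 64^3 signed-distance grid stands in
    # for a 3D shape; not the papers' pipeline.
    import numpy as np
    import pywt

    sdf = np.random.randn(64, 64, 64).astype(np.float32)

    # One-level 3D discrete wavelet transform: the 'aaa' sub-band is the
    # coarse (low-frequency) coefficient volume; the remaining seven
    # sub-bands ('aad', 'ada', ..., 'ddd') hold the detail coefficients.
    coeffs = pywt.dwtn(sdf, wavelet='bior6.8', axes=(0, 1, 2))
    coarse = coeffs['aaa']
    details = {k: v for k, v in coeffs.items() if k != 'aaa'}

    # Perfect reconstruction from the two parts is what makes the
    # representation compact yet invertible.
    recon = pywt.idwtn(coeffs, wavelet='bior6.8', axes=(0, 1, 2))
    print(np.abs(recon[:64, :64, :64] - sdf).max())  # near-zero error

Working on the coarse/detail coefficient volumes instead of the full grid is what lets a generative model operate on a far smaller, structured representation.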
Text-guided 3D shape generation remains challenging due to the absence of large paired text-shape data, the substantial semantic gap between these two modalities, and the structural complexity of 3D shapes. This paper presents a new framework called …
External link:
http://arxiv.org/abs/2209.04145
This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology. Going beyond previous works, we learn a topology-aware neural template specific to each input and then deform the template to reconstruct …
External link:
http://arxiv.org/abs/2206.04942
Author:
Li, Ruihui, Yu, Mengdi, Wang, Chengliang, Sun, Jingjiang, Xiao, Hongjie, He, Jianjiang, Wang, Qingfu
Published in:
Next Materials, Vol. 6, January 2025
This work presents an innovative method for point set self-embedding that encodes the structural information of a dense point set into its sparser version in a visual but imperceptible form. The self-embedded point set can function as the ordinary …
External link:
http://arxiv.org/abs/2202.13577
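The last snippet concerns embedding a dense point set's structure into a sparser version of itself. The embedding mechanism is the paper's contribution and is not reproduced here; for reference, the sparse subset that such methods compare against is typically obtained with farthest point sampling, sketched below (point counts and names are illustrative):

    # Farthest point sampling: a standard way to derive a sparse,
    # well-spread subset of a dense point cloud. Not the paper's method.
    import numpy as np

    def farthest_point_sampling(points: np.ndarray, k: int) -> np.ndarray:
        """Greedily pick k indices whose points spread over the set."""
        n = points.shape[0]
        chosen = np.zeros(k, dtype=np.int64)
        dist = np.full(n, np.inf)  # distance to nearest chosen point so far
        chosen[0] = 0              # arbitrary seed point
        for i in range(1, k):
            d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
            dist = np.minimum(dist, d)
            chosen[i] = int(np.argmax(dist))  # farthest from all chosen
        return chosen

    dense = np.random.rand(2048, 3)   # hypothetical dense scan
    sparse = dense[farthest_point_sampling(dense, 256)]
    print(sparse.shape)               # (256, 3)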