Showing 1 - 10 of 319 for search: '"Wu Hongzhi"'
Author:
Ma, Xiaohe, Deschaintre, Valentin, Hašan, Miloš, Luan, Fujun, Zhou, Kun, Wu, Hongzhi, Hu, Yiwei
High-quality material generation is key for virtual environment authoring and inverse rendering. We propose MaterialPicker, a multi-modal material generator leveraging a Diffusion Transformer (DiT) architecture, improving and simplifying the creation…
External link:
http://arxiv.org/abs/2412.03225
Author:
Feng, Xiang, Yu, Chang, Bi, Zoubin, Shang, Yintong, Gao, Feng, Wu, Hongzhi, Zhou, Kun, Jiang, Chenfanfu, Yang, Yin
Recent image-to-3D reconstruction models have greatly advanced geometry generation, but they still struggle to faithfully generate realistic appearance. To address this, we introduce ARM, a novel method that reconstructs high-quality 3D meshes and re…
External link:
http://arxiv.org/abs/2411.10825
Published in:
ACM SIGGRAPH Asia 2024 Conference Papers
We present a spatial and angular Gaussian based representation and a triple splatting process, for real-time, high-quality novel lighting-and-view synthesis from multi-view point-lit input images. To describe complex appearance, we employ a Lambertia…
External link:
http://arxiv.org/abs/2410.11419
Author:
Liu, Minghua, Zeng, Chong, Wei, Xinyue, Shi, Ruoxi, Chen, Linghao, Xu, Chao, Zhang, Mengqi, Wang, Zhaoning, Zhang, Xiaoshuai, Liu, Isabella, Wu, Hongzhi, Su, Hao
Open-world 3D reconstruction models have recently garnered significant attention. However, without sufficient 3D inductive bias, existing methods typically entail expensive training costs and struggle to extract high-quality 3D meshes. In this work, …
External link:
http://arxiv.org/abs/2408.10198
Author:
Feng, Yutao, Shang, Yintong, Feng, Xiang, Lan, Lei, Zhe, Shandian, Shao, Tianjia, Wu, Hongzhi, Zhou, Kun, Su, Hao, Jiang, Chenfanfu, Yang, Yin
We present ElastoGen, a knowledge-driven AI model that generates physically accurate 4D elastodynamics. Unlike deep models that learn from video- or image-based observations, ElastoGen leverages the principles of physics and learns from established m…
External link:
http://arxiv.org/abs/2405.15056
Published in:
ACM SIGGRAPH 2024 Conference Proceedings
This paper presents a novel method for exerting fine-grained lighting control during text-driven diffusion-based image generation. While existing diffusion models already have the ability to generate images under any lighting condition, without addit…
External link:
http://arxiv.org/abs/2402.11929
Author:
Feng, Yutao, Feng, Xiang, Shang, Yintong, Jiang, Ying, Yu, Chang, Zong, Zeshun, Shao, Tianjia, Wu, Hongzhi, Zhou, Kun, Jiang, Chenfanfu, Yang, Yin
We demonstrate the feasibility of integrating physics-based animations of solids and fluids with 3D Gaussian Splatting (3DGS) to create novel effects in virtual scenes reconstructed using 3DGS. Leveraging the coherence of the Gaussian Splatting and P…
External link:
http://arxiv.org/abs/2401.15318
We present the first real-time method for inserting a rigid virtual object into a neural radiance field, which produces realistic lighting and shadowing effects, as well as allows interactive manipulation of the object. By exploiting the rich informa…
External link:
http://arxiv.org/abs/2310.05837
Published in:
ACM SIGGRAPH 2023 Conference Proceedings
This paper presents a novel neural implicit radiance representation for free viewpoint relighting from a small set of unstructured photographs of an object lit by a moving point light source different from the view position. We express the shape as a…
External link:
http://arxiv.org/abs/2308.13404
Author:
Feng, Xiang, Kang, Kaizhang, Pei, Fan, Ding, Huakeng, You, Jinjiang, Tan, Ping, Zhou, Kun, Wu, Hongzhi
We propose a novel framework to automatically learn to aggregate and transform photometric measurements from multiple unstructured views into spatially distinctive and view-invariant low-level features, which are subsequently fed to a multi-view ster…
External link:
http://arxiv.org/abs/2308.03492