Showing 1 - 10 of 141
for search: '"Chen, Anpei"'
Author:
Esposito, Stefano, Chen, Anpei, Reiser, Christian, Bulò, Samuel Rota, Porzi, Lorenzo, Schwarz, Katja, Richardt, Christian, Zollhöfer, Michael, Kontschieder, Peter, Geiger, Andreas
High-quality real-time view synthesis methods are based on volume rendering, splatting, or surface rendering. While surface-based methods are generally the fastest, they cannot faithfully model fuzzy geometry like hair. In turn, alpha-blending techniques…
External link:
http://arxiv.org/abs/2409.02482
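The abstract above mentions volume rendering and alpha blending; as a minimal sketch of the standard front-to-back compositing such methods rely on (the sample values are illustrative, not anything from the paper):

    import numpy as np

    def composite_front_to_back(colors, alphas):
        # C = sum_i T_i * alpha_i * c_i, with transmittance T_i = prod_{j<i} (1 - alpha_j)
        transmittance, out = 1.0, np.zeros(3)
        for c, a in zip(colors, alphas):
            out += transmittance * a * c
            transmittance *= 1.0 - a
            if transmittance < 1e-4:  # early ray termination once the ray is opaque
                break
        return out

    # Two semi-transparent samples along one ray (made-up values).
    print(composite_front_to_back(np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]), [0.6, 0.8]))

Early termination is why mostly-opaque surfaces render fast, while fuzzy geometry like hair keeps many samples alive per ray.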
Radiance field methods have achieved photorealistic novel view synthesis and geometry reconstruction, but they are mostly applied in per-scene optimization or small-baseline settings. While several recent works investigate feed-forward reconstruction…
External link:
http://arxiv.org/abs/2407.04699
3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high-quality novel view synthesis and fast rendering speed without baking. However, 3DGS fails to accurately represent surfaces due to the multi-view…
External link:
http://arxiv.org/abs/2403.17888
We present NeLF-Pro, a novel representation to model and reconstruct light fields in diverse natural scenes that vary in extent and spatial granularity. In contrast to previous fast reconstruction methods that represent the 3D scene globally, we model…
External link:
http://arxiv.org/abs/2312.13328
Author:
Xu, Haofei, Chen, Anpei, Chen, Yuedong, Sakaridis, Christos, Zhang, Yulun, Pollefeys, Marc, Geiger, Andreas, Yu, Fisher
We present Multi-Baseline Radiance Fields (MuRF), a general feed-forward approach to solving sparse view synthesis under multiple different baseline settings (small and large baselines, and different numbers of input views). To render a target novel view…
External link:
http://arxiv.org/abs/2312.04565
As pretrained text-to-image diffusion models become increasingly powerful, recent efforts have been made to distill knowledge from these pretrained models for optimizing a text-guided 3D model. Most of the existing methods generate…
External link:
http://arxiv.org/abs/2312.00093
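The distillation this abstract refers to is typically Score Distillation Sampling (as in DreamFusion); the sketch below illustrates one SDS update under that assumption, with render and eps_pred as hypothetical stubs for a differentiable renderer and a frozen pretrained denoiser:

    import torch

    def render(theta):
        # Hypothetical differentiable "renderer": reshape parameters into an image.
        return theta.view(1, 1, 2, 2)

    def eps_pred(x_t, t):
        # Stand-in for a frozen, text-conditioned diffusion denoiser.
        return 0.1 * x_t

    def sds_step(theta, t, lr=1e-2):
        # grad_x = w(t) * (eps_hat(x_t, t) - eps), backpropagated through the renderer only
        x = render(theta)
        eps = torch.randn_like(x)
        alpha, sigma = torch.cos(t), torch.sin(t)  # toy noise schedule
        with torch.no_grad():
            x_t = alpha * x + sigma * eps
            grad_x = sigma ** 2 * (eps_pred(x_t, t) - eps)
        grad_theta = torch.autograd.grad(x, theta, grad_outputs=grad_x)[0]
        with torch.no_grad():
            theta -= lr * grad_theta
        return theta

    theta = torch.zeros(4, requires_grad=True)
    sds_step(theta, t=torch.tensor(0.4))

The key design point is that no gradient flows through the diffusion model; it only scores the rendered image.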
We present a method for generating high-quality watertight manifold meshes from multi-view input images. Existing volumetric rendering methods are robust in optimization but tend to generate noisy meshes with poor topology. Differentiable rasterization…
External link:
http://arxiv.org/abs/2305.17134
3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images. Despite numerous task-specific methods, developing a comprehensive model remains challenging. In this paper, we present SSDNeRF…
External link:
http://arxiv.org/abs/2304.06714
We present Factor Fields, a novel framework for modeling and representing signals. Factor Fields decomposes a signal into a product of factors, each represented by a classical or neural field representation which operates on transformed input coordinates…
External link:
http://arxiv.org/abs/2302.01226
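Read literally, the decomposition above is s(x) ≈ ∏_i f_i(γ_i(x)): a product of factor fields, each evaluated on transformed coordinates. A toy sketch with 1D grid factors and periodic transforms (the concrete choices here are illustrative, not the paper's):

    import numpy as np

    def periodic_transform(x, freq):  # coordinate transform gamma_i
        return (x * freq) % 1.0

    def grid_factor(coeffs, u):  # 1D field: linear interpolation into a coefficient grid
        idx = u * (len(coeffs) - 1)
        lo = np.floor(idx).astype(int)
        hi = np.minimum(lo + 1, len(coeffs) - 1)
        w = idx - lo
        return (1 - w) * coeffs[lo] + w * coeffs[hi]

    def factor_field(x, factors):
        # signal(x) ~= prod_i f_i(gamma_i(x))
        out = np.ones_like(x)
        for coeffs, freq in factors:
            out *= grid_factor(coeffs, periodic_transform(x, freq))
        return out

    # Two factors at different granularities, with random illustrative coefficients.
    rng = np.random.default_rng(0)
    print(factor_field(np.linspace(0.0, 1.0, 5), [(rng.random(8), 1.0), (rng.random(8), 4.0)]))

Varying the factor representations and coordinate transforms is what lets one framework cover several classical and neural field models.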
Author:
Song, Liangchen, Chen, Anpei, Li, Zhong, Chen, Zhang, Chen, Lele, Yuan, Junsong, Xu, Yi, Geiger, Andreas
Visually exploring a real-world 4D spatiotemporal space freely in VR has been a long-term quest. The task is especially appealing when only a few or even a single RGB camera is used for capturing the dynamic scene. To this end, we present an efficient…
External link:
http://arxiv.org/abs/2210.15947