Showing 1 - 10 of 175 results for the search '"Porzi, P."'
Author:
Chao, Brian, Tseng, Hung-Yu, Porzi, Lorenzo, Gao, Chen, Li, Tuotuo, Li, Qinbo, Saraf, Ayush, Huang, Jia-Bin, Kopf, Johannes, Wetzstein, Gordon, Kim, Changil
3D Gaussian Splatting (3DGS) has recently emerged as a state-of-the-art 3D reconstruction and rendering technique due to its high-quality results and fast training and rendering time. However, pixels covered by the same Gaussian are always shaded in…
External link:
http://arxiv.org/abs/2411.18625
Author:
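As background for the abstract above: 3DGS shades each pixel by blending the sorted Gaussians that cover it front to back. A minimal Python sketch of that compositing rule follows; the function name and the input values are illustrative, not taken from the paper.

```python
# Minimal sketch of front-to-back alpha compositing, the per-pixel
# blending rule used by 3D Gaussian Splatting rasterizers. For one
# pixel, each covering Gaussian contributes a (grayscale) color c_i
# and an opacity alpha_i, already sorted from near to far.

def composite(colors, alphas):
    """Blend per-Gaussian (color, alpha) pairs front to back."""
    out = 0.0
    transmittance = 1.0  # fraction of light not yet absorbed
    for c, a in zip(colors, alphas):
        out += transmittance * a * c
        transmittance *= (1.0 - a)
    return out

# A fully opaque front Gaussian hides everything behind it.
print(composite([0.8, 0.2], [1.0, 1.0]))  # -> 0.8
```

Because every Gaussian contributes a single color to all pixels it covers, the blend varies per pixel only through the alphas, which is the limitation the abstract alludes to.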
Roessle, Barbara, Müller, Norman, Porzi, Lorenzo, Bulò, Samuel Rota, Kontschieder, Peter, Dai, Angela, Nießner, Matthias
We propose L3DG, the first approach for generative 3D modeling of 3D Gaussians through a latent 3D Gaussian diffusion formulation. This enables effective generative 3D modeling, scaling to generation of entire room-scale scenes which can be very effi…
External link:
http://arxiv.org/abs/2410.13530
Author:
Esposito, Stefano, Chen, Anpei, Reiser, Christian, Bulò, Samuel Rota, Porzi, Lorenzo, Schwarz, Katja, Richardt, Christian, Zollhöfer, Michael, Kontschieder, Peter, Geiger, Andreas
High-quality real-time view synthesis methods are based on volume rendering, splatting, or surface rendering. While surface-based methods generally are the fastest, they cannot faithfully model fuzzy geometry like hair. In turn, alpha-blending techni…
External link:
http://arxiv.org/abs/2409.02482
Author:
Müller, Norman, Schwarz, Katja, Roessle, Barbara, Porzi, Lorenzo, Bulò, Samuel Rota, Nießner, Matthias, Kontschieder, Peter
We introduce MultiDiff, a novel approach for consistent novel view synthesis of scenes from a single RGB image. The task of synthesizing novel views from a single reference image is highly ill-posed by nature, as there exist multiple, plausible expla…
External link:
http://arxiv.org/abs/2406.18524
Author:
Chen, Jun-Kun, Bulò, Samuel Rota, Müller, Norman, Porzi, Lorenzo, Kontschieder, Peter, Wang, Yu-Xiong
This paper proposes ConsistDreamer - a novel framework that lifts 2D diffusion models with 3D awareness and 3D consistency, thus enabling high-fidelity instruction-guided scene editing. To overcome the fundamental limitation of missing 3D consistency…
External link:
http://arxiv.org/abs/2406.09404
Author:
Fischer, Tobias, Kulhanek, Jonas, Bulò, Samuel Rota, Porzi, Lorenzo, Pollefeys, Marc, Kontschieder, Peter
We present an efficient neural 3D scene representation for novel-view synthesis (NVS) in large-scale, dynamic urban areas. Existing works are not well suited for applications like mixed-reality or closed-loop simulation due to their limited visual qu…
External link:
http://arxiv.org/abs/2406.03175
In this paper, we address the limitations of Adaptive Density Control (ADC) in 3D Gaussian Splatting (3DGS), a scene representation method achieving high-quality, photorealistic results for novel view synthesis. ADC has been introduced for automatic…
External link:
http://arxiv.org/abs/2404.06109
In this paper, we address common error sources for 3D Gaussian Splatting (3DGS) including blur, imperfect camera poses, and color inconsistencies, with the goal of improving its robustness for practical applications like reconstructions from handheld…
External link:
http://arxiv.org/abs/2404.04211
We estimate the radiance field of large-scale dynamic areas from multiple vehicle captures under varying environmental conditions. Previous works in this domain are either restricted to static environments, do not scale to more than a single short vi…
External link:
http://arxiv.org/abs/2404.00168
Author:
Turki, Haithem, Agrawal, Vasu, Bulò, Samuel Rota, Porzi, Lorenzo, Kontschieder, Peter, Ramanan, Deva, Zollhöfer, Michael, Richardt, Christian
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render. One reason is that they make use of volume rendering, thus requiring many samples (and model queries) per ray at render time. Although this represen…
External link:
http://arxiv.org/abs/2312.03160
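The abstract above points to volume rendering's per-ray sampling cost as the bottleneck. A minimal Python sketch of the standard quadrature along one ray follows; the function name and sample values are illustrative, not from the paper, but the formula is the usual discretized volume rendering integral.

```python
import math

def render_ray(sigmas, colors, delta):
    """Discretized volume rendering along one ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta)) * c_i, with
    T_i the transmittance accumulated over the samples before i.
    Each of the len(sigmas) samples costs one model query."""
    out = 0.0
    transmittance = 1.0
    for sigma, c in zip(sigmas, colors):
        alpha = 1.0 - math.exp(-sigma * delta)  # absorption in this segment
        out += transmittance * alpha * c
        transmittance *= (1.0 - alpha)
    return out

# Empty space contributes nothing; a single dense sample returns its color.
print(render_ray([0.0] * 4, [1.0] * 4, 0.1))   # -> 0.0
```

The loop makes the cost explicit: rendering one pixel touches every sample, so dozens to hundreds of model queries per ray, which is why reducing sample counts speeds up rendering.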