Showing 1 - 10 of 100 for search: '"Bulò, Samuel Rota"'
Author:
Esposito, Stefano, Chen, Anpei, Reiser, Christian, Bulò, Samuel Rota, Porzi, Lorenzo, Schwarz, Katja, Richardt, Christian, Zollhöfer, Michael, Kontschieder, Peter, Geiger, Andreas
High-quality real-time view synthesis methods are based on volume rendering, splatting, or surface rendering. While surface-based methods generally are the fastest, they cannot faithfully model fuzzy geometry like hair. In turn, alpha-blending techniques …
External link:
http://arxiv.org/abs/2409.02482
Author:
Müller, Norman, Schwarz, Katja, Roessle, Barbara, Porzi, Lorenzo, Bulò, Samuel Rota, Nießner, Matthias, Kontschieder, Peter
We introduce MultiDiff, a novel approach for consistent novel view synthesis of scenes from a single RGB image. The task of synthesizing novel views from a single reference image is highly ill-posed by nature, as there exist multiple, plausible explanations …
External link:
http://arxiv.org/abs/2406.18524
Author:
Chen, Jun-Kun, Bulò, Samuel Rota, Müller, Norman, Porzi, Lorenzo, Kontschieder, Peter, Wang, Yu-Xiong
This paper proposes ConsistDreamer - a novel framework that lifts 2D diffusion models with 3D awareness and 3D consistency, thus enabling high-fidelity instruction-guided scene editing. To overcome the fundamental limitation of missing 3D consistency …
External link:
http://arxiv.org/abs/2406.09404
Author:
Fischer, Tobias, Kulhanek, Jonas, Bulò, Samuel Rota, Porzi, Lorenzo, Pollefeys, Marc, Kontschieder, Peter
We present an efficient neural 3D scene representation for novel-view synthesis (NVS) in large-scale, dynamic urban areas. Existing works are not well suited for applications like mixed-reality or closed-loop simulation due to their limited visual quality …
External link:
http://arxiv.org/abs/2406.03175
In this paper, we address the limitations of Adaptive Density Control (ADC) in 3D Gaussian Splatting (3DGS), a scene representation method achieving high-quality, photorealistic results for novel view synthesis. ADC has been introduced for automatic …
External link:
http://arxiv.org/abs/2404.06109
We estimate the radiance field of large-scale dynamic areas from multiple vehicle captures under varying environmental conditions. Previous works in this domain are either restricted to static environments, do not scale to more than a single short video …
External link:
http://arxiv.org/abs/2404.00168
Author:
Turki, Haithem, Agrawal, Vasu, Bulò, Samuel Rota, Porzi, Lorenzo, Kontschieder, Peter, Ramanan, Deva, Zollhöfer, Michael, Richardt, Christian
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render. One reason is that they make use of volume rendering, thus requiring many samples (and model queries) per ray at render time. Although this representation …
External link:
http://arxiv.org/abs/2312.03160
Author:
Xu, Linning, Agrawal, Vasu, Laney, William, Garcia, Tony, Bansal, Aayush, Kim, Changil, Bulò, Samuel Rota, Porzi, Lorenzo, Kontschieder, Peter, Božič, Aljaž, Lin, Dahua, Zollhöfer, Michael, Richardt, Christian
We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields. To this end, we designed and built a custom multi-camera rig to densely capture …
External link:
http://arxiv.org/abs/2311.02542
Author:
Roessle, Barbara, Müller, Norman, Porzi, Lorenzo, Bulò, Samuel Rota, Kontschieder, Peter, Nießner, Matthias
Published in:
ACM Transactions on Graphics, Vol. 42, No. 6, Article 207 (2023), pp. 1-14
Neural Radiance Fields (NeRF) have shown impressive novel view synthesis results; nonetheless, even thorough recordings yield imperfections in reconstructions, for instance due to poorly observed areas or minor lighting changes. Our goal is to mitigate …
External link:
http://arxiv.org/abs/2306.06044
Author:
Sarlin, Paul-Edouard, DeTone, Daniel, Yang, Tsun-Yi, Avetisyan, Armen, Straub, Julian, Malisiewicz, Tomasz, Bulo, Samuel Rota, Newcombe, Richard, Kontschieder, Peter, Balntas, Vasileios
Humans can orient themselves in their 3D environments using simple 2D maps. Differently, algorithms for visual localization mostly rely on complex 3D point clouds that are expensive to build, store, and maintain over time. We bridge this gap by introducing …
External link:
http://arxiv.org/abs/2304.02009