Deep scene-scale material estimation from multi-view indoor captures
Author: Siddhant Prakash, Gilles Rainer, Adrien Bousseau, George Drettakis
Contributors: GRAPHics and DEsign with hEterogeneous COntent (GRAPHDECO), Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria), Université Côte d'Azur (UCA)
Language: English
Subject: FOS: Computer and information sciences; Computer Science - Computer Vision and Pattern Recognition (cs.CV); Computer Science - Graphics (cs.GR); [INFO.INFO-GR] Computer Science [cs]/Graphics [cs.GR]; ACM I.3; MSC 68U05; Computer Graphics and Computer-Aided Design; General Engineering; Human-Computer Interaction; Digital 3D assets; Deep learning; Photogrammetry; Material estimation; Synthetic dataset; Indoor scenes
Source: Computers & Graphics: X, 2022, ⟨10.1016/j.cag.2022.09.010⟩
ISSN: 0097-8493; 2590-1486
DOI: 10.1016/j.cag.2022.09.010
Description: The movie and video game industries have adopted photogrammetry as a way to create digital 3D assets from multiple photographs of a real-world scene. But photogrammetry algorithms typically output an RGB texture atlas of the scene that serves only as visual guidance for skilled artists to create material maps suitable for physically-based rendering. We present a learning-based approach that automatically produces digital assets ready for physically-based rendering, by estimating approximate material maps from multi-view captures of indoor scenes, for use with retopologized geometry. We base our approach on a material estimation Convolutional Neural Network (CNN) that we execute on each input image. We leverage the view-dependent visual cues provided by the multiple observations of the scene by gathering, for each pixel of a given image, the color of the corresponding point in other images. This image-space CNN provides an ensemble of predictions, which we merge in texture space as the last step of our approach. Our results demonstrate that the recovered assets can be used directly for physically-based rendering and editing of real indoor scenes from any viewpoint and under novel lighting. Our method generates approximate material maps in a fraction of the time required by the closest previous solutions. 17 pages. Illustrative sketches of the multi-view gathering and texture-space merge steps follow the record below.
Database: OpenAIRE
External link:
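
The description mentions gathering, for each pixel of a reference image, the color of the corresponding scene point as seen in the other views. Below is a minimal sketch of that reprojection step, assuming known pinhole intrinsics `K`, world-to-camera extrinsics `R`, `t`, and a per-view depth map; all function and variable names are illustrative assumptions, not the paper's actual code.

```python
# Minimal sketch (not the paper's code) of the multi-view gathering step:
# back-project each pixel of a reference view to 3D using its depth map,
# then project into another view and sample that view's color.
import numpy as np

def backproject(depth, K, R, t):
    """Lift all pixels of a reference view to world-space points.

    depth: (H, W) z-depth per pixel
    K:     (3, 3) pinhole intrinsics
    R, t:  world-to-camera rotation (3, 3) and translation (3,)
    Returns (3, H*W) world-space points.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)
    rays = np.linalg.inv(K) @ pix             # camera-space rays with z = 1
    pts_cam = rays * depth.reshape(1, -1)     # scale rays by z-depth
    return R.T @ (pts_cam - t.reshape(3, 1))  # invert X_cam = R X_world + t

def gather_colors(pts_world, image, K, R, t):
    """Project world points into another view and sample its colors.

    Points that fall outside the frame or behind the camera get NaN.
    Returns (H*W, 3) colors aligned with the reference pixels.
    """
    H, W, _ = image.shape
    pts_cam = R @ pts_world + t.reshape(3, 1)
    uv = K @ pts_cam
    uv = uv[:2] / np.clip(uv[2:], 1e-8, None)   # perspective divide
    u = np.round(uv[0]).astype(int)
    v = np.round(uv[1]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (pts_cam[2] > 0)
    colors = np.full((pts_world.shape[1], 3), np.nan)
    colors[valid] = image[v[valid], u[valid]]
    return colors
```

A real pipeline would also need a visibility test, comparing the projected depth against the target view's own depth map to reject occluded correspondences; that is omitted here for brevity.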
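
The description also states that the per-image predictions form an ensemble that is merged in texture space. Below is a minimal sketch of one such merge, a confidence-weighted average over the texels of the atlas; `texel_ids` and `weights` are illustrative assumptions about how pixels map to texels via the scene's UV parameterization.

```python
# Minimal sketch (not the paper's code) of the texture-space merge:
# per-view material predictions are scattered to the texels they map to
# and combined by weighted average into a single texture atlas.
import numpy as np

def merge_in_texture_space(predictions, texel_ids, weights, num_texels):
    """Fuse an ensemble of per-pixel predictions into a texture atlas.

    predictions: (N, C) material predictions pooled over all views
                 (e.g. C = 4 for albedo RGB + roughness)
    texel_ids:   (N,)  atlas texel index each prediction maps to
    weights:     (N,)  per-prediction confidence (e.g. low for grazing views)
    Returns (num_texels, C) merged material maps.
    """
    C = predictions.shape[1]
    accum = np.zeros((num_texels, C))
    wsum = np.zeros(num_texels)
    # Unbuffered scatter-add so repeated texel indices accumulate.
    np.add.at(accum, texel_ids, predictions * weights[:, None])
    np.add.at(wsum, texel_ids, weights)
    return accum / np.maximum(wsum, 1e-8)[:, None]
```

Weighting by viewing angle or prediction confidence is one plausible design choice here; it downweights observations from oblique views, where both reprojected colors and CNN predictions tend to be least reliable.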