Dynamic scene novel view synthesis via deferred spatio-temporal consistency
Author: | Beatrix-Emőke Fülöp-Balogh, Eleanor Tursman, James Tompkin, Julie Digne, Nicolas Bonneel |
---|---|
Contributors: | Origami (Origami), Laboratoire d'InfoRmatique en Image et Systèmes d'information (LIRIS), Institut National des Sciences Appliquées de Lyon (INSA Lyon), Université de Lyon-Institut National des Sciences Appliquées (INSA)-Centre National de la Recherche Scientifique (CNRS)-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-École Centrale de Lyon (ECL), Université de Lyon-Université Lumière - Lyon 2 (UL2)-Institut National des Sciences Appliquées de Lyon (INSA Lyon), Université de Lyon-Université Lumière - Lyon 2 (UL2) |
Year of publication: | 2022 |
Subject: |
FOS: Computer and information sciences
Human-Computer Interaction; Computer Science - Graphics (cs.GR); Computer Science - Computer Vision and Pattern Recognition (cs.CV); Computing Methodologies: Image Processing and Computer Vision; Computing Methodologies: Computer Graphics; General Engineering; Computer Graphics and Computer-Aided Design; [INFO.INFO-GR] Computer Science [cs]/Graphics [cs.GR] |
Source: | Computers & Graphics. 107:220-230 |
ISSN: | 0097-8493 |
Description: | Structure from motion (SfM) enables us to reconstruct a scene via casual capture from cameras at different viewpoints, and novel view synthesis (NVS) allows us to render a captured scene from a new viewpoint. Both are hard with casual capture and dynamic scenes: SfM produces noisy and spatio-temporally sparse reconstructed point clouds, resulting in NVS with spatio-temporally inconsistent effects. We consider the SfM and NVS parts together to ease the challenge. First, for SfM, we recover stable camera poses, then we defer the requirement for temporally-consistent points across the scene and reconstruct only a sparse point cloud per timestep that is noisy in space-time. Second, for NVS, we present a variational diffusion formulation on depths and colors that lets us robustly cope with the noise by enforcing spatio-temporal consistency via per-pixel reprojection weights derived from the input views. Together, this deferred approach generates novel views for dynamic scenes without requiring challenging spatio-temporally consistent reconstructions or training complex models on large datasets. We demonstrate our algorithm on real-world dynamic scenes against classic and more recent learning-based baseline approaches. Accompanying video: https://youtu.be/RXK2iv980nU |
Database: | OpenAIRE |
External link: |
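The description mentions a variational diffusion formulation on depths with per-pixel confidence weights. As a rough illustration of that general idea (not the paper's actual method — the function name, parameters, and the simple screened-diffusion update below are all assumptions for this sketch), noisy per-pixel depths can be smoothed iteratively while a weighted data term anchors pixels with high reprojection confidence to their observed values:

```python
import numpy as np

def diffuse_depth(depth, weights, n_iters=200, lam=0.2):
    """Hypothetical sketch of weighted variational diffusion on a depth map.

    depth:   (H, W) noisy per-pixel depth observations
    weights: (H, W) per-pixel confidence in [0, 1] (e.g., from reprojection error)
    lam:     step size; kept <= 0.2 so the explicit update stays stable
    """
    obs = depth.copy()   # keep the original observations for the data term
    d = depth.copy()
    for _ in range(n_iters):
        # 4-neighbour Laplacian with replicated borders
        up    = np.roll(d, -1, axis=0); up[-1]      = d[-1]
        down  = np.roll(d,  1, axis=0); down[0]     = d[0]
        left  = np.roll(d, -1, axis=1); left[:, -1] = d[:, -1]
        right = np.roll(d,  1, axis=1); right[:, 0] = d[:, 0]
        lap = up + down + left + right - 4.0 * d
        # Gradient step: smoothness everywhere, data fidelity where confident.
        d = d + lam * lap - lam * weights * (d - obs)
    return d
```

In this toy form, low-weight pixels are filled in from their neighbours by pure diffusion, while high-weight pixels stay close to the measured depth; the real formulation in the paper additionally enforces consistency across time and colors.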