Indoor Scene Reconstruction From Monocular Video Combining Contextual and Geometric Priors

Author: Mingyun Wen, Xuanyu Sheng, Kyungeun Cho
Language: English
Year of Publication: 2024
Source: IEEE Access, Vol. 12, pp. 153360-153369 (2024)
Document Type: article
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3481250
Description: Recent advancements in three-dimensional (3D) indoor scene reconstruction from monocular videos using deep learning have gained considerable attention. However, existing methods remain inferior to reconstructions built from data obtained with 3D sensors, primarily because video data lacks explicit depth information. Depth inference from monocular videos relies on visual cues, such as texture, which can become ambiguous owing to lighting, reflections, and material properties. Most existing methods use convolutional neural networks (CNNs) for feature extraction and integrate features from multiple viewpoints to generate 3D features. However, CNNs cannot capture effective features in areas with unclear visual cues owing to the limited receptive fields of their shallow layers. To overcome these issues, this study proposes a keyframe feature-generation module employing a pretrained vision transformer (ViT) that capitalizes on its global perception to infer and synthesize features in areas with ambiguous visual cues. In addition, a pretrained multi-view stereo network is employed to generate a cost volume as a geometric feature, and the geometric features are further enhanced using features extracted by the ViT. The effectiveness of the proposed approach is demonstrated through comparisons with existing methods on real-world datasets.
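The fusion described in the abstract, combining 2D ViT keyframe features with an MVS cost volume to produce enhanced geometric features, could look roughly like the following PyTorch sketch. All class names (ViTKeyframeEncoder, GeometricFeatureFusion), tensor shapes, and hyperparameters are illustrative assumptions, not the authors' implementation; a real system would load an actual pretrained ViT and a pretrained multi-view stereo network rather than the stand-in modules used here.

# Illustrative sketch only: module names, shapes, and hyperparameters are
# assumptions; the paper's actual architecture and weights are not reproduced.
import torch
import torch.nn as nn

class ViTKeyframeEncoder(nn.Module):
    """Stand-in for a pretrained ViT: patchify the keyframe, apply
    transformer layers (global self-attention), and reshape the patch
    tokens back into a 2D feature map."""
    def __init__(self, in_ch=3, dim=256, patch=16, depth=4, heads=8):
        super().__init__()
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, img):                      # img: (B, 3, H, W)
        x = self.patch_embed(img)                # (B, dim, H/16, W/16)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, h*w, dim)
        tokens = self.encoder(tokens)            # global attention over patches
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class GeometricFeatureFusion(nn.Module):
    """Fuses a cost volume (e.g., produced by a pretrained MVS network)
    with ViT keyframe features by tiling the 2D map across the depth
    dimension and applying a 3D convolution."""
    def __init__(self, vit_dim=256, cost_ch=8, out_ch=32):
        super().__init__()
        self.reduce = nn.Conv2d(vit_dim, out_ch, kernel_size=1)
        self.fuse = nn.Conv3d(cost_ch + out_ch, out_ch, kernel_size=3,
                              padding=1)

    def forward(self, cost_volume, vit_feat):
        # cost_volume: (B, cost_ch, D, h, w); vit_feat: (B, vit_dim, h, w)
        d = cost_volume.shape[2]
        f2d = self.reduce(vit_feat)                        # (B, out_ch, h, w)
        f3d = f2d.unsqueeze(2).expand(-1, -1, d, -1, -1)   # tile over depth
        return self.fuse(torch.cat([cost_volume, f3d], dim=1))

# Toy usage with random inputs at assumed resolutions.
enc, fusion = ViTKeyframeEncoder(), GeometricFeatureFusion()
img = torch.randn(1, 3, 256, 320)            # one keyframe
vit_feat = enc(img)                          # (1, 256, 16, 20)
cost = torch.randn(1, 8, 48, 16, 20)         # placeholder cost volume, D=48
enhanced = fusion(cost, vit_feat)            # (1, 32, 48, 16, 20)
print(enhanced.shape)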
Database: Directory of Open Access Journals