DI-Fusion: Online Implicit 3D Reconstruction with Deep Priors
Authors: | Jiahui Huang, Shi-Sheng Huang, Haoxuan Song, Shi-Min Hu |
Year: | 2021 |
Subjects: |
Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Robotics (cs.RO); online 3D reconstruction; signed distance function; implicit representation; scene priors; probabilistic modeling; deep neural networks; camera pose estimation; storage efficiency |
Source: | CVPR |
DOI: | 10.1109/cvpr46437.2021.00882 |
Description: | Previous online 3D dense reconstruction methods struggle to balance memory storage against surface quality, largely because they rely on a static underlying geometry representation, such as a TSDF (truncated signed distance function) or surfels, without any knowledge of scene priors. In this paper, we present DI-Fusion (Deep Implicit Fusion), based on a novel 3D representation, the Probabilistic Local Implicit Voxel (PLIVox), for online 3D reconstruction with a commodity RGB-D camera. Each PLIVox encodes scene priors capturing both the local geometry and its uncertainty, parameterized by a deep neural network. With such deep priors, we perform online implicit 3D reconstruction with state-of-the-art camera trajectory estimation accuracy and mapping quality, while achieving better storage efficiency than previous online 3D reconstruction approaches. Our implementation is available at https://www.github.com/huangjh-pub/di-fusion. |
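The description above outlines the PLIVox idea: a sparse voxel grid whose cells each hold a latent code, with a shared decoder mapping a local coordinate plus that code to a signed-distance estimate and an uncertainty. The toy sketch below illustrates that structure only; the class name, latent dimension, and the random two-layer decoder are all illustrative assumptions, not the authors' implementation (see their repository for the real one).

```python
import numpy as np

class PLIVoxGrid:
    """Toy sketch of a probabilistic local implicit voxel grid.

    Each occupied voxel stores a latent code; a shared decoder maps
    (local coordinate, latent code) -> (sdf_mean, variance). All shapes
    and the random decoder weights are illustrative assumptions.
    """

    def __init__(self, voxel_size=0.1, latent_dim=8, seed=0):
        self.voxel_size = voxel_size
        self.latent_dim = latent_dim
        self.voxels = {}  # integer voxel index -> latent code (np.ndarray)
        rng = np.random.default_rng(seed)
        # Stand-in decoder: one tanh layer, then a linear head that
        # outputs (sdf_mean, log_variance).
        self.w1 = rng.standard_normal((3 + latent_dim, 16)) * 0.1
        self.w2 = rng.standard_normal((16, 2)) * 0.1

    def voxel_index(self, p):
        """Integer grid cell containing world-space point p."""
        return tuple(np.floor(np.asarray(p) / self.voxel_size).astype(int))

    def integrate(self, p):
        """Allocate a latent code for the voxel containing p (if new)."""
        idx = self.voxel_index(p)
        if idx not in self.voxels:
            rng = np.random.default_rng(hash(idx) % (2**32))
            self.voxels[idx] = rng.standard_normal(self.latent_dim) * 0.01
        return idx

    def query(self, p):
        """Decode (sdf_mean, variance) at p, or None if unobserved."""
        idx = self.voxel_index(p)
        if idx not in self.voxels:
            return None  # no prior has been integrated here
        # Local coordinate of p inside its voxel, in [0, 1)^3.
        local = np.asarray(p) / self.voxel_size - np.asarray(idx)
        h = np.tanh(np.concatenate([local, self.voxels[idx]]) @ self.w1)
        sdf_mean, log_var = h @ self.w2
        return float(sdf_mean), float(np.exp(log_var))
```

Decoding a variance alongside the signed distance is what makes the representation probabilistic: downstream fusion and tracking can weight observations by the decoder's confidence, and unobserved voxels simply return no estimate.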
Database: | OpenAIRE |
External link: |