ViP-NeRF: Visibility Prior for Sparse Input Neural Radiance Fields
Author: Somraj, Nagabhushan; Soundararajan, Rajiv
Publication Year: 2023
Source: ACM SIGGRAPH 2023 Conference Proceedings, Article 71, Pages 1-11
Document Type: Working Paper
DOI: 10.1145/3588432.3591539
Description: Neural radiance fields (NeRF) have achieved impressive performance in view synthesis by encoding neural representations of a scene. However, NeRFs require hundreds of images per scene to synthesize photo-realistic novel views. Training them on sparse input views leads to overfitting and incorrect scene depth estimation, resulting in artifacts in the rendered novel views. Sparse input NeRFs were recently regularized by supervising them with dense depth estimated by pre-trained networks, achieving improved performance over sparse depth constraints. However, we find that such depth priors may be inaccurate due to generalization issues. Instead, we hypothesize that the visibility of pixels in different input views can be estimated more reliably to provide dense supervision. In this regard, we compute a visibility prior through the use of plane sweep volumes, which does not require any pre-training (see the sketch after this record). By regularizing the NeRF training with the visibility prior, we successfully train the NeRF with few input views. We also reformulate the NeRF to directly output the visibility of a 3D point from a given viewpoint, reducing the training time with the visibility constraint. On multiple datasets, our model outperforms the competing sparse input NeRF models, including those that use learned priors. The source code for our model can be found on our project page: https://nagabhushansn95.github.io/publications/2023/ViP-NeRF.html. Comment: SIGGRAPH 2023
Database: arXiv
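The description hinges on computing a dense visibility prior from plane sweep volumes between calibrated views, with no pre-trained network. Below is a minimal illustrative sketch of that idea in Python with NumPy and OpenCV. It is a sketch under stated assumptions, not the authors' implementation (the official code is on the project page linked above): the function name `plane_sweep_visibility`, the mean absolute color error, and the threshold `err_thresh` are all illustrative choices.

```python
import numpy as np
import cv2

def plane_sweep_visibility(img_ref, img_src, K, R, t, depths, err_thresh=0.1):
    """Mark which pixels of the reference view appear visible in the source view.

    img_ref, img_src: HxWx3 uint8 images from two calibrated viewpoints.
    K: 3x3 camera intrinsics (assumed shared by both views).
    R, t: pose mapping reference-camera coordinates to source-camera
          coordinates, X_src = R @ X_ref + t.
    depths: candidate fronto-parallel plane depths in the reference frame.
    """
    h, w = img_ref.shape[:2]
    n = np.array([[0.0, 0.0, 1.0]])           # plane normal (1x3, ref frame)
    K_inv = np.linalg.inv(K)
    ref = img_ref.astype(np.float32) / 255.0
    best_err = np.full((h, w), np.inf, dtype=np.float32)
    for d in depths:
        # Homography induced by the plane n^T X = d, mapping ref pixels to
        # src pixels: H = K (R + t n^T / d) K^{-1}.
        H = K @ (R + (t.reshape(3, 1) @ n) / d) @ K_inv
        # WARP_INVERSE_MAP makes warpPerspective sample the source image at
        # H(x) for each reference pixel x, aligning src to the ref view.
        warped = cv2.warpPerspective(
            img_src, H, (w, h), flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        err = np.abs(ref - warped.astype(np.float32) / 255.0).mean(axis=-1)
        best_err = np.minimum(best_err, err)  # keep best match over all planes
    # A low minimum matching error over the sweep suggests the pixel has a
    # consistent correspondence in the source view, i.e. it is visible there.
    # Pixels warping outside the source image hit the zero border and score
    # high error, so they are marked as not visible.
    return best_err < err_thresh
```

A possible invocation, with poses and intrinsics from any calibration or SfM pipeline (the depth range here is a scene-dependent guess):

```python
depths = np.geomspace(0.5, 10.0, num=64)   # candidate plane depths
visibility = plane_sweep_visibility(img0, img1, K, R, t, depths)
```

In the paper, such dense visibility maps regularize NeRF training; the NeRF is additionally reformulated to output the visibility of a 3D point from a given viewpoint directly, so the constraint can be enforced without the cost of marching rays through the other views at every training step.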