Deep Depth from Focal Stack with Defocus Model for Camera-Setting Invariance.

Authors: Fujimura, Yuki; Iiyama, Masaaki; Funatomi, Takuya; Mukaigawa, Yasuhiro
Source: International Journal of Computer Vision; June 2024, Vol. 132, Issue 6, pp. 1970-1985, 16 pp.
Abstract: We propose deep depth from focal stack (DDFS), which takes a focal stack as input to a neural network for estimating scene depth. Defocus blur is a useful cue for depth estimation. However, the size of the blur depends not only on scene depth but also on camera settings such as focus distance, focal length, and f-number. Current learning-based methods without any defocus model cannot estimate a correct depth map when camera settings differ between training and test time. Our method takes a plane sweep volume as an intermediate input representation that encodes the constraint between scene depth, defocus images, and camera settings; this representation enables depth estimation with different camera settings at training and test time. This camera-setting invariance can enhance the applicability of DDFS. The experimental results also indicate that our method is robust against a synthetic-to-real domain gap. [ABSTRACT FROM AUTHOR]
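As a rough illustration of the constraint the abstract refers to, the sketch below uses the standard thin-lens circle-of-confusion formula to relate blur size, scene depth, and camera settings (focus distance, focal length, f-number). It is a minimal sketch under these textbook-optics assumptions; the function name, parameters, and the plane-sweep comment are illustrative and not the authors' actual model or notation.

```python
# Minimal sketch of a thin-lens defocus model linking scene depth to blur size
# under given camera settings. Names and units are illustrative assumptions.

def blur_diameter_mm(depth_m, focus_dist_m, focal_len_mm, f_number):
    """Circle-of-confusion diameter (mm) on the sensor for an object at
    depth_m, with the lens focused at focus_dist_m (thin-lens model)."""
    f = focal_len_mm / 1000.0          # focal length in metres
    aperture = f / f_number            # aperture diameter in metres
    c = aperture * f * abs(depth_m - focus_dist_m) / (depth_m * (focus_dist_m - f))
    return c * 1000.0                  # convert back to millimetres


# A plane-sweep-style intermediate representation can be thought of as
# hypothesising a set of candidate depths and, for each one, predicting the
# blur that every focal-stack slice should exhibit under the known camera
# settings; the hypothesis whose predicted blur best matches the observed
# images indicates the depth, regardless of the specific settings used.
if __name__ == "__main__":
    for d in (0.5, 1.0, 2.0, 4.0):     # candidate depths in metres
        print(d, blur_diameter_mm(d, focus_dist_m=1.0, focal_len_mm=50, f_number=2.8))
```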
Database: Complementary Index