Learning Wavefront Coding for Extended Depth of Field Imaging
Authors: Ugur Akpinar, Erdem Sahin, Monjurul Meem, Rajesh Menon, Atanas Gotchev
Contributors: Tampere University, Computing Sciences
Year of publication: 2021
Subjects: Wavefront coding; Depth of field; Deblurring; Convolutional neural network; Computational photography; Computer vision; Refractive lens; Spatial analysis; Artificial intelligence; Software; Computer Graphics and Computer-Aided Design; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Optics (physics.optics)
Source: IEEE Transactions on Image Processing, vol. 30, pp. 3307-3320
ISSN: 1057-7149; 1941-0042
DOI: 10.1109/tip.2021.3060166
Description: Depth of field is an important property of imaging systems that strongly affects the quality of the acquired spatial information. Extended depth of field (EDoF) imaging is a challenging, ill-posed problem that has been extensively addressed in the literature. We propose a computational imaging approach to EDoF in which wavefront coding is performed by a diffractive optical element (DOE) and deblurring by a convolutional neural network. Thanks to the end-to-end differentiable modeling of optical image formation and computational post-processing, we jointly optimize the optical design, i.e., the DOE, and the deblurring network through standard gradient descent methods (a conceptual sketch of this joint optimization follows the record below). Based on the properties of the underlying refractive lens and the desired EDoF range, we provide an analytical expression for the search space of the DOE, which is instrumental in the convergence of the end-to-end network. We achieve superior EDoF imaging performance compared to the state of the art and demonstrate results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
Version: acceptedVersion
Database: OpenAIRE
External link:
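
A minimal conceptual sketch of the joint DOE-and-CNN optimization described in the abstract, assuming PyTorch, a single-wavelength paraxial Fourier-optics PSF model, and a tiny residual CNN. The module names, network size, and defocus parameterization are illustrative assumptions, not the authors' implementation or the analytical DOE search space derived in the paper.

```python
# Hypothetical sketch: end-to-end optimization of a learnable DOE phase and a CNN deblurrer.
# Not the authors' code; a toy Fourier-optics model with a single wavelength and random data.
import torch
import torch.nn as nn


class DOECamera(nn.Module):
    """Learnable phase mask -> depth-dependent PSF -> blurred sensor image."""

    def __init__(self, n=64):
        super().__init__()
        self.phase = nn.Parameter(torch.zeros(n, n))  # DOE phase profile (learned)
        y, x = torch.meshgrid(torch.linspace(-1, 1, n),
                              torch.linspace(-1, 1, n), indexing="ij")
        self.register_buffer("aperture", ((x**2 + y**2) <= 1.0).float())  # circular lens pupil
        self.register_buffer("r2", x**2 + y**2)  # radial coordinate for defocus phase

    def psf(self, defocus):
        # Pupil function: aperture * exp(i * (DOE phase + quadratic defocus phase))
        pupil = self.aperture * torch.exp(1j * (self.phase + defocus * self.r2))
        field = torch.fft.fftshift(torch.fft.fft2(torch.fft.ifftshift(pupil)))
        psf = field.abs() ** 2
        return psf / psf.sum()  # energy-normalized PSF

    def forward(self, img, defocus):
        # Circular convolution of the sharp image with the depth-dependent PSF (via FFT)
        k = self.psf(defocus)
        K = torch.fft.rfft2(torch.fft.ifftshift(k), s=img.shape[-2:])
        I = torch.fft.rfft2(img)
        return torch.fft.irfft2(I * K, s=img.shape[-2:])


class Deblur(nn.Module):
    """Tiny residual CNN standing in for the deblurring network."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # predict a residual correction


camera, deblur = DOECamera(), Deblur()
opt = torch.optim.Adam(list(camera.parameters()) + list(deblur.parameters()), lr=1e-3)

sharp = torch.rand(4, 1, 64, 64)  # placeholder training batch of sharp images
for step in range(10):
    defocus = (torch.rand(()) - 0.5) * 20.0  # random defocus inside the target EDoF range
    blurred = camera(sharp, defocus)         # differentiable image formation
    restored = deblur(blurred)               # computational post-processing
    loss = nn.functional.mse_loss(restored, sharp)
    opt.zero_grad()
    loss.backward()                          # gradients reach both the CNN and the DOE phase
    opt.step()
```

The sketch only illustrates the end-to-end differentiability emphasized in the abstract: the restoration loss backpropagates through the FFT-based image-formation model into the learnable DOE phase, so the optics and the deblurring network are updated jointly by the same gradient descent step.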