Guided Image Deblurring by Deep Multi-Modal Image Fusion

Authors: Yuqi Liu, Zehua Sheng, Hui-Liang Shen
Language: English
Publication year: 2022
Source: IEEE Access, Vol 10, pp. 130708-130718 (2022)
Document type: article
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3229056
Description: Estimating sharp images from blurry observations remains a difficult task in image processing. Previous methods may produce deblurred images that lose details or contain artifacts. A feasible way to deal with this problem is to exploit additional guidance images, such as near-infrared or flash images. In this paper, we propose a fusion framework for image deblurring, called Guided Deblurring Fusion Network (GDFNet), which integrates multi-modal information for better deblurring performance. Unlike previous works that directly compute a deblurred image from paired multi-modal degraded and guidance images, GDFNet employs image fusion to obtain the deblurred image. It combines the advantages of single-image and guided deblurring by fusing their pre-deblurred streams with a convolutional neural network (CNN). We adopt a blur/residual image splitting strategy, fusing the residual images to enhance the representation ability of the encoders and preserve details. We further employ a two-level coarse-to-fine reconstruction strategy that improves fusion and deblurring performance by supervising the multi-scale outputs. Quantitative comparisons on multi-modal image datasets show that GDFNet recovers correct structures and produces fewer artifacts while preserving more details. The peak signal-to-noise ratio (PSNR) of GDFNet evaluated on the blurry/flash dataset is at least 0.9 dB higher than that of the compared algorithms.
Database: Directory of Open Access Journals
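
Note: The following is a minimal, hypothetical PyTorch sketch of the fusion scheme the abstract describes: two pre-deblurred residual streams (single-image and guided) are encoded, fused, and decoded at two scales so that both the coarse and fine outputs can be supervised, with the final sharp estimate formed as blurry input plus fused residual. All layer sizes, the fusion rule, and the names (TwoStreamFusionNet, conv_block, psnr) are illustrative assumptions, not the published GDFNet implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU; a stand-in for the paper's encoders.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TwoStreamFusionNet(nn.Module):
    # Hypothetical two-stream fusion network, not the published GDFNet.
    def __init__(self, ch=32):
        super().__init__()
        # Separate encoders for the two pre-deblurred residual streams.
        self.enc_single = conv_block(3, ch)
        self.enc_guided = conv_block(3, ch)
        # Fusion of the concatenated features, then a 2-level decoder.
        self.fuse = conv_block(2 * ch, ch)
        self.dec_coarse = nn.Conv2d(ch, 3, 3, padding=1)  # half-resolution residual
        self.dec_fine = conv_block(ch + 3, ch)
        self.out_fine = nn.Conv2d(ch, 3, 3, padding=1)    # full-resolution residual

    def forward(self, res_single, res_guided, blurry):
        f = self.fuse(torch.cat([self.enc_single(res_single),
                                 self.enc_guided(res_guided)], dim=1))
        # Coarse residual predicted at half resolution (supervised output).
        coarse = self.dec_coarse(F.avg_pool2d(f, 2))
        up = F.interpolate(coarse, scale_factor=2, mode='bilinear',
                           align_corners=False)
        # Fine residual refines the upsampled coarse estimate (supervised output).
        fine = self.out_fine(self.dec_fine(torch.cat([f, up], dim=1)))
        # Blur/residual splitting: sharp estimate = blurry input + residual.
        return blurry + fine, coarse

def psnr(x, y, max_val=1.0):
    # PSNR in dB between a restored image and ground truth, values in [0, 1].
    mse = F.mse_loss(x, y)
    return 10.0 * torch.log10(max_val ** 2 / mse)

# Usage example (random tensors in place of real image pairs):
#   net = TwoStreamFusionNet()
#   b, rs, rg = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
#   fine, coarse = net(rs, rg, b)
#   print(psnr(fine, torch.rand(1, 3, 64, 64)))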