Infrared and Visible Image Fusion Based on NSST and RDN.

Author: Peizhou Yan, Jiancheng Zou, Zhengzheng Li, Xin Yang
Subject:
Source: Intelligent Automation & Soft Computing; 2021, Vol. 28 Issue 1, p213-225, 13p
Abstract: In driving assistance systems, detecting a driver's facial features in the cab across a wide range of lighting conditions is mission critical. One method that addresses this concern is infrared and visible image fusion, whose purpose is to generate a composite image that illustrates scene details clearly and consistently under varied illumination. Our study introduces a novel approach to this method with marked improvements. We use the nonsubsampled shearlet transform (NSST) to obtain the low and high frequency sub-bands of the infrared and visible images. For low frequency sub-band fusion, we incorporate the local average energy and standard deviation. For the high frequency sub-bands, a residual dense network (RDN) is applied for multiscale feature extraction to generate high frequency sub-band feature maps, and the maximum weighted average algorithm is then employed to fuse the high frequency sub-bands. Finally, we reconstruct the fused image from the fused low and high frequency sub-bands by the inverse NSST. Experiments and application in real-world driving scenarios show that this method performs well on objective indices compared with other contemporary, industry-standard algorithms. In particular, the subjective visual effect, fine texture, and scene content are fully expressed, the target edges are distinct and pronounced, and the detailed information of the source images is thoroughly captured. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
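
The abstract only outlines the fusion rules, so the following is a minimal illustrative sketch, not the authors' implementation. It assumes the NSST decomposition and the RDN feature maps are produced elsewhere and passed in as NumPy arrays; the exact weighting formulas are assumptions, since the abstract does not state them. The function names `fuse_low_frequency` and `fuse_high_frequency` are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low_frequency(low_ir, low_vis, win=3):
    """Fuse low frequency sub-bands using a per-pixel weight built from
    local average energy and local standard deviation (illustrative rule;
    the paper's exact formula is not given in the abstract)."""
    def activity(band):
        mean = uniform_filter(band, size=win)            # local mean
        energy = uniform_filter(band ** 2, size=win)     # local average energy
        std = np.sqrt(np.maximum(energy - mean ** 2, 0)) # local standard deviation
        return energy + std
    a_ir, a_vis = activity(low_ir), activity(low_vis)
    w_ir = a_ir / (a_ir + a_vis + 1e-12)                 # normalized weight
    return w_ir * low_ir + (1.0 - w_ir) * low_vis

def fuse_high_frequency(high_ir, high_vis, feat_ir, feat_vis):
    """Fuse a high frequency sub-band guided by feature-activity maps
    (assumed to come from an RDN). Here the 'maximum weighted average'
    is simplified to choosing, per pixel, the source with the larger
    feature response."""
    w_ir = (feat_ir >= feat_vis).astype(high_ir.dtype)
    return w_ir * high_ir + (1.0 - w_ir) * high_vis
```

In a full pipeline these rules would be applied to each NSST sub-band of the infrared and visible images, and the fused sub-bands would then be passed to the inverse NSST to reconstruct the fused image.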