Author:
Ma, Decao; Xian, Yong; Li, Bing; Li, Shaopeng; Zhang, Daqiao
Source:
Visual Computer; Feb 2024, Vol. 40, Issue 2, p1289-1298, 10p
Abstract:
This study proposes an infrared (IR) generative adversarial network (IR-GAN) to generate high-quality IR images from visible images, based on a conditional generative adversarial network. IR-GAN mitigates texture loss and edge distortion during infrared image generation and includes a novel generator implementing a U-Net architecture based on ConvNeXt (UConvNeXt). This approach enhances the utilization of low-level and deep image features during upsampling through two types of skip connections, thereby improving texture information. IR-GAN also adds a gradient vector loss to generator training, which effectively improves the edge extraction capability of the generator. In addition, a multi-scale PatchGAN is included in IR-GAN to enrich local and global image features. Results produced by the proposed model were compared with those of the Pix2Pix and ThermalGAN architectures on the IVFG dataset and assessed using five evaluation metrics. Our method achieved a structural similarity index measure (SSIM) 10.1% higher than that of Pix2Pix and 12.4% higher than that of ThermalGAN on the IVFG dataset. [ABSTRACT FROM AUTHOR]
Database:
Complementary Index
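The abstract above mentions a gradient vector loss added to generator training to reduce edge distortion. The paper's exact formulation is not given in this record, so the following is only a minimal, hypothetical PyTorch sketch of such a loss, assuming finite-difference image gradients compared with an L1 penalty; the function names, tensor shapes, and the finite-difference form are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a gradient-based edge loss for an image-to-image GAN.
# Assumption: "gradient vector loss" is approximated here as an L1 distance
# between finite-difference gradient fields of generated and real IR images.
import torch
import torch.nn.functional as F


def image_gradients(img: torch.Tensor):
    """Horizontal and vertical finite-difference gradients of a batch (N, C, H, W)."""
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]  # differences along width
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]  # differences along height
    return dx, dy


def gradient_vector_loss(fake_ir: torch.Tensor, real_ir: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between the gradient fields of generated and real IR images."""
    fake_dx, fake_dy = image_gradients(fake_ir)
    real_dx, real_dy = image_gradients(real_ir)
    return F.l1_loss(fake_dx, real_dx) + F.l1_loss(fake_dy, real_dy)


if __name__ == "__main__":
    fake = torch.rand(2, 1, 64, 64)  # placeholder generated IR batch
    real = torch.rand(2, 1, 64, 64)  # placeholder ground-truth IR batch
    print(gradient_vector_loss(fake, real).item())
```

In a full training loop, a term like this would typically be weighted and added to the adversarial and reconstruction losses of the generator; the weighting used by IR-GAN is not stated in this record.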