A detailed review and analysis of deep learning-based fusion of IR and visible images.

Authors: Shihabudeen, H., Rajeesh, J.
Source: AIP Conference Proceedings; 2024, Vol. 2965 Issue 1, p1-12, 12p
Abstract: A more informative picture is produced by integrating complementary data from several image modalities. Infrared images can show the scene's temperature distribution, while visible images provide textural detail. Infrared images are unaffected by lighting conditions, whereas visible images depend on the amount of available light. Infrared imagers are useful for detecting small targets because they capture additional information from an object's thermal signature. Merging infrared and visible images yields a more detailed and realistic composite, which is in line with how the human visual system works. A number of fusion techniques can be used to produce fused images, each offering a different level of fusion performance, and the computational advantages of deep learning approaches have drawn increasing attention to them. Fused images benefit many fields, including object detection, remote sensing, and surveillance. In this research, we take a close look at the deep learning techniques that have been developed to fuse infrared and visible data. We used objective assessment measures such as entropy, mutual information, and the structural similarity index measure to evaluate the approaches' performance both quantitatively and qualitatively. After weighing the benefits and drawbacks of the current approaches, the article concludes by discussing the state of deep learning for infrared and visible image fusion and offering some thoughts on potential future research. Researchers in the field of infrared and visible image fusion may find this review article useful. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
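
Note: The abstract cites entropy, mutual information, and the structural similarity index as objective fusion-quality measures. The following is a minimal sketch, not the authors' evaluation code, showing how such scores are commonly computed for a fused image against its infrared and visible sources; it assumes 8-bit grayscale NumPy arrays of identical shape and uses scikit-image for SSIM.

```python
# Sketch of common fusion metrics (EN, MI, SSIM); illustrative only,
# not taken from the reviewed paper. Assumes 8-bit grayscale inputs.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def entropy(img: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(a: np.ndarray, b: np.ndarray) -> float:
    """Mutual information (bits) between two images via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=256, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def evaluate_fusion(ir: np.ndarray, vis: np.ndarray, fused: np.ndarray) -> dict:
    """Entropy of the fused image, plus MI and SSIM against both sources."""
    return {
        "EN": entropy(fused),
        "MI": mutual_information(ir, fused) + mutual_information(vis, fused),
        "SSIM": 0.5 * (ssim(ir, fused, data_range=255)
                       + ssim(vis, fused, data_range=255)),
    }
```

Higher EN and MI indicate that more information from the source images is retained in the fused result, while SSIM reflects structural fidelity to the sources; the summed-MI and averaged-SSIM conventions above are one common choice among several in the fusion literature.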