Author: Xi, Xiuliang; Jin, Xin; Jiang, Qian; Lin, Yu; Zhou, Wei; Guo, Lei
Subject:
Source: Advanced Intelligent Systems (2640-4567); Nov 2023, Vol. 5, Issue 11, p1-17, 17p
Abstract: The purpose of infrared and visible image fusion is to generate a fused image with rich information. Although most fusion methods achieve good performance, they still fall short in extracting feature information from the source images, which makes it difficult to balance thermal-radiation-region information and texture-detail information in the fused image. To address these issues, an expectation-maximization (EM) learning framework based on generative adversarial networks (GAN) for infrared and visible image fusion is proposed. The EM algorithm (EMA) can obtain maximum-likelihood estimates for problems with latent variables, which helps address the lack of labels in infrared and visible image fusion. An axial-corner attention mechanism is designed to capture long-range semantic information and texture information from the visible image. A multifrequency attention mechanism mines the relationships between features at different scales to highlight the target information of infrared images in the fused result. Meanwhile, two discriminators are used to balance the two different feature types, and a new loss function is designed to maximize the likelihood of the data under soft class-label assignments obtained from the expectation network. Extensive experiments demonstrate the superiority of EMA-GAN over the state of the art. [ABSTRACT FROM AUTHOR]
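For context, the EM step the abstract refers to can be read against the generic form of the algorithm: an E-step that computes soft posterior assignments of a latent class, and an M-step that maximizes the expected complete-data log-likelihood under those assignments. The sketch below uses generic symbols (data $x_i$, latent class $z_i$, parameters $\theta$, mixing weights $\pi_k$), not the paper's own notation or its exact loss.
\[
\text{E-step:}\quad \gamma_{ik} \;=\; p\!\left(z_i = k \mid x_i, \theta^{(t)}\right) \;=\; \frac{\pi_k\, p\!\left(x_i \mid z_i = k, \theta^{(t)}\right)}{\sum_{j} \pi_j\, p\!\left(x_i \mid z_i = j, \theta^{(t)}\right)}
\]
\[
\text{M-step:}\quad \theta^{(t+1)} \;=\; \arg\max_{\theta} \sum_{i} \sum_{k} \gamma_{ik} \,\log p\!\left(x_i, z_i = k \mid \theta\right)
\]
In this reading, the soft class-label assignments produced by the expectation network play the role of the responsibilities $\gamma_{ik}$, and the generator and discriminator parameters are the $\theta$ updated in the M-step.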
Database: Complementary Index
External link: