The Research of Unmanned Aerial Vehicle Video Fusion Evaluation Method Based on Structure Similarity

Authors: Jun Xie, Yingnan Liu, Zhonglin Xu, Xiuying Fan, Shuang Wen
Year of publication: 2015
Source: Proceedings of the 2015 International Conference on Applied Science and Engineering Innovation.
ISSN: 2352-5401
Description: By carrying a variety of imaging sensors, unmanned aerial vehicles (UAVs) have begun to demonstrate their flexibility and importance, and image fusion technology has become increasingly indispensable. In this paper, existing video fusion methods and fusion performance evaluation methods are analyzed, and an improved video fusion evaluation algorithm based on structural similarity is proposed. Simulation results show that the improved evaluation method outperforms the conventional method, providing a practical approach to the fusion performance evaluation problem.

Introduction

By carrying a variety of imaging sensors, a UAV improves its ability to perform autonomous navigation, battlefield reconnaissance, combat damage assessment, and target search and tracking. A UAV can not only adapt to complex and changing environmental conditions but also acquire abundant spatial information about a scene. At present, UAV imaging sensors can capture visible-light and infrared video, and these video streams are widely used in practical applications. Because of weather, illumination, and other environmental conditions, a single image sensor is restricted and unsuitable for all-weather operation; moreover, the imaging principles of the sensors differ, so the characteristic information in their images also differs.

Video Fusion

Image fusion technology integrates the complementary information of different kinds of images. The fused image combines their respective advantages: the limitations of a single sensor in operating environment, usable range, and target acquisition are overcome, while the spatial resolution and clarity of the image are improved. This facilitates image understanding and recognition and makes more effective use of the image data.
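As a concrete illustration of the pixel-level fusion described above, here is a minimal sketch that fuses two co-registered streams by weighted averaging. The averaging rule and the function names are assumptions for illustration only; the paper does not fix a particular fusion rule, and practical systems typically use multiscale or region-based rules.

```python
import numpy as np

def fuse_frames(frame_a, frame_b, alpha=0.5):
    """Fuse two co-registered grayscale frames (e.g. visible and
    infrared) by weighted averaging -- a deliberately simple stand-in
    for the multiscale fusion rules used in practice."""
    fa = frame_a.astype(np.float64)
    fb = frame_b.astype(np.float64)
    fused = alpha * fa + (1.0 - alpha) * fb
    return np.clip(fused, 0, 255).astype(np.uint8)

def fuse_video(frames_a, frames_b, alpha=0.5):
    """Apply the per-frame fusion rule to two synchronized streams."""
    return [fuse_frames(fa, fb, alpha) for fa, fb in zip(frames_a, frames_b)]
```

Any per-frame rule can be substituted for `fuse_frames` without changing the surrounding loop, which is why the evaluation indices discussed later operate frame by frame as well.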
A video is in fact a continuous sequence of images ordered in time; it exploits the persistence of human vision, playing a series of images so that the eye perceives motion. It is one of the most information-rich, intuitive, vivid, and concrete media for carrying information. Video fusion technology integrates video of the same scene acquired by multiple visual sensors at the same or different times, thereby enriching video detail and enhancing the cognitive effect. With the rapid development of science and technology, video fusion is widely used in military activities. Video fusion shares the same purpose as image fusion: to obtain a more accurate, comprehensive, and reliable description of the same scene or target in video form [1].

International Conference on Applied Science and Engineering Innovation (ASEI 2015), © 2015 the authors, published by Atlantis Press, p. 807.

Improved Video Fusion Evaluation Algorithm

The algorithm is based mainly on the structural similarity information of the image. Because the human visual system is sensitive to image structure, and the loss of structural information reflects image distortion well, the structural similarity between the input images and the fused image can be used to evaluate the performance of an image fusion algorithm.
Structural similarity is defined by Wang [2,3]. For images A and B:

$$\mathrm{SSIM}(A,B) = [l(A,B)]^{\alpha}\,[c(A,B)]^{\beta}\,[s(A,B)]^{\gamma} = \left(\frac{2\mu_A\mu_B + C_1}{\mu_A^2 + \mu_B^2 + C_1}\right)^{\alpha} \left(\frac{2\sigma_A\sigma_B + C_2}{\sigma_A^2 + \sigma_B^2 + C_2}\right)^{\beta} \left(\frac{\sigma_{AB} + C_3}{\sigma_A\sigma_B + C_3}\right)^{\gamma} \quad (1)$$

where l(A, B), c(A, B), and s(A, B) are the luminance, contrast, and correlation comparison terms, respectively; \mu_A and \mu_B are the means of images A and B; \sigma_A and \sigma_B are their standard deviations; \sigma_{AB} is the covariance of A and B; \alpha, \beta, and \gamma can be adjusted according to the importance of each term; and C_1, C_2, and C_3 are small constants used to avoid a zero denominator. Setting \alpha = \beta = \gamma = 1 and C_3 = C_2/2, Eq. 1 becomes:

$$\mathrm{SSIM}(A,B) = \frac{(2\mu_A\mu_B + C_1)(2\sigma_{AB} + C_2)}{(\mu_A^2 + \mu_B^2 + C_1)(\sigma_A^2 + \sigma_B^2 + C_2)} \quad (2)$$

The video fusion performance evaluation method based on structural similarity and human vision evaluates fusion performance from two aspects, spatio-temporal information extraction and temporal consistency, and its results are closer to subjective evaluation. Based on SSIM, a spatial fusion performance index is built, and a temporal performance index is built from the SSIM values of the frame-difference images between the fused video and the input videos; together they evaluate the performance of the video fusion algorithm comprehensively [2]. The implementation consists of four steps. First, a single-frame spatial fusion performance factor is built from the SSIM values between each frame of the fused video and the input videos.
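Eq. 2 can be computed directly from image statistics. Below is a minimal sketch using whole-image statistics, assuming 8-bit grayscale inputs and the common constants C1 = (0.01 × 255)² and C2 = (0.03 × 255)² from Wang's original work; the paper does not state which constant values it uses.

```python
import numpy as np

def ssim_global(a, b, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """SSIM of Eq. 2 (alpha = beta = gamma = 1, C3 = C2/2) computed
    from whole-image statistics rather than a sliding window."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()            # sigma_A^2, sigma_B^2
    cov_ab = ((a - mu_a) * (b - mu_b)).mean()  # sigma_AB
    return ((2 * mu_a * mu_b + c1) * (2 * cov_ab + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    )
```

For identical images the numerator and denominator coincide, so the score is exactly 1; any distortion lowers it, which is the property the evaluation indices below rely on.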
The building formula is [4]:

$$Q_S(V_a, V_b, V_f \mid t) = \frac{\sum_{i=1}^{I}\sum_{j=1}^{J}\left[\,w_a(i,j,t)\,\mathrm{SSIM}(V_a, V_f \mid w_{i,j,t}) + w_b(i,j,t)\,\mathrm{SSIM}(V_b, V_f \mid w_{i,j,t})\,\right]}{\sum_{i=1}^{I}\sum_{j=1}^{J}\left[\,w_a(i,j,t) + w_b(i,j,t)\,\right]} \quad (3)$$

where V_a and V_b are the input videos, V_f is the fused video, w_{i,j,t} is the local window at position (i, j) of frame t, \mathrm{SSIM}(\cdot, \cdot \mid w_{i,j,t}) is the SSIM value computed within that window, and w_a(i,j,t) and w_b(i,j,t) are the local weights of the two input videos.
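The single-frame spatial index can be sketched as a weighted average of window-wise SSIM values, reusing Eq. 2 inside each window. In this sketch the local variance stands in for the saliency weights w_a, w_b (an assumption in the spirit of Piella's fusion quality index; the paper's exact weighting scheme is not reproduced here), and all function names are illustrative.

```python
import numpy as np

def ssim_window(a, b, c1=6.5025, c2=58.5225):
    """SSIM of Eq. 2 evaluated over one local window
    (c1, c2 are the common (0.01*255)^2 and (0.03*255)^2 choices)."""
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2)
    )

def spatial_fusion_quality(va, vb, vf, win=8):
    """Single-frame spatial fusion quality: a weighted average of
    window-wise SSIM values between each input frame (va, vb) and the
    fused frame vf. Local variance serves as the saliency weight --
    an assumed stand-in for the paper's w_a(i,j,t), w_b(i,j,t)."""
    num = den = 0.0
    h, w = vf.shape
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            wa = va[i:i + win, j:j + win].astype(np.float64)
            wb = vb[i:i + win, j:j + win].astype(np.float64)
            wf = vf[i:i + win, j:j + win].astype(np.float64)
            sal_a, sal_b = wa.var(), wb.var()
            num += sal_a * ssim_window(wa, wf) + sal_b * ssim_window(wb, wf)
            den += sal_a + sal_b
    return num / den if den > 0 else 0.0
```

When the fused frame equals both inputs, every window's SSIM is 1 and the index is 1; windows where the fused frame loses structure from a salient input pull the index down.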
Database: OpenAIRE