Multimodal Feature-Guided Pretraining for RGB-T Perception
Author: Junlin Ouyang, Pengcheng Jin, Qingwang Wang
Language: English
Year of publication: 2024
Subject:
Source: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 17, pp. 16041-16050 (2024)
Document type: article
ISSN: 1939-1404; 2151-1535
DOI: 10.1109/JSTARS.2024.3454054
Description: Wide-range multiscale object detection for multispectral scene perception from a drone perspective is challenging. Previous RGB-T perception methods directly use a backbone pretrained on RGB data to extract thermal infrared features, leading to an unexpected domain shift. We propose a novel multimodal feature-guided masked reconstruction pretraining method, named M2FP, aimed at learning transferable representations for drone-based RGB-T environmental perception tasks without domain bias. This article makes two key contributions. 1) We design a cross-modal feature interaction module in M2FP, which encourages the modality-specific backbones to actively learn cross-modal feature representations and avoids modality bias. 2) We design a global-aware feature interaction and fusion module suitable for various downstream tasks, which enhances the model's environmental perception from a global perspective in wide-range drone-based scenes. We fine-tune M2FP on a drone-based object detection dataset (DroneVehicle) and a semantic segmentation dataset (Kust4K). On these two tasks, M2FP achieves state-of-the-art performance, improving on the second-best methods by 1.8% in mean average precision and 0.9% in mean intersection over union, respectively.
Database: Directory of Open Access Journals
External link:
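
The description above mentions a cross-modal feature interaction module that lets each modality-specific backbone learn cross-modal representations. The snippet below is only a minimal sketch of that general idea, assuming a generic cross-attention design between RGB and thermal token features; the class and parameter names (CrossModalInteraction, embed_dim, num_heads) are illustrative and are not taken from the M2FP paper.

```python
# Illustrative sketch of cross-modal feature interaction via cross-attention.
# This is NOT the M2FP implementation; names and shapes are assumptions.
import torch
import torch.nn as nn


class CrossModalInteraction(nn.Module):
    """Lets each modality attend to the other so both backbones receive
    cross-modal context, e.g. during masked-reconstruction pretraining."""

    def __init__(self, embed_dim: int = 256, num_heads: int = 8):
        super().__init__()
        # RGB tokens query thermal tokens, and vice versa.
        self.rgb_from_t = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.t_from_rgb = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm_rgb = nn.LayerNorm(embed_dim)
        self.norm_t = nn.LayerNorm(embed_dim)

    def forward(self, rgb_tokens: torch.Tensor, t_tokens: torch.Tensor):
        # rgb_tokens, t_tokens: (batch, num_tokens, embed_dim)
        rgb_ctx, _ = self.rgb_from_t(rgb_tokens, t_tokens, t_tokens)
        t_ctx, _ = self.t_from_rgb(t_tokens, rgb_tokens, rgb_tokens)
        # Residual fusion keeps modality-specific features while injecting
        # information from the other modality.
        return self.norm_rgb(rgb_tokens + rgb_ctx), self.norm_t(t_tokens + t_ctx)


if __name__ == "__main__":
    block = CrossModalInteraction(embed_dim=256, num_heads=8)
    rgb = torch.randn(2, 196, 256)      # e.g. 14x14 patch tokens from an RGB backbone
    thermal = torch.randn(2, 196, 256)  # matching tokens from a thermal backbone
    rgb_out, t_out = block(rgb, thermal)
    print(rgb_out.shape, t_out.shape)   # torch.Size([2, 196, 256]) twice
```

The symmetric residual design here is one common way to avoid modality bias: each stream keeps its own features and only adds attended context from the other stream, rather than replacing them.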