Author: |
Jinwei Han, Lihong Kang, Jing Tian, Mingyong Jiang, Ningbo Guo |
Language: |
English |
Year of Publication: |
2024 |
Subject: |
|
Source: |
Sensors, Vol 24, Iss 20, p 6746 (2024) |
Document Type: |
article |
ISSN: |
1424-8220 |
DOI: |
10.3390/s24206746 |
Description: |
Due to the small size of vehicle targets, complex background environments, and the discrete scattering characteristics of high-resolution synthetic aperture radar (SAR) images, existing deep learning networks struggle to extract high-quality vehicle features from SAR images, which degrades vehicle localization accuracy. To address this issue, this paper proposes a vehicle localization method for SAR images based on feature reconstruction and aggregation with rotating boxes. Specifically, the method first employs a backbone network that integrates the space-channel reconfiguration module (SCRM), which combines spatial and channel attention mechanisms designed specifically for SAR images, to extract features. The network then applies a progressive cross-fusion mechanism (PCFM) that effectively combines multi-view features from different feature layers, enriching the feature maps and improving feature representation quality. Finally, these features, which cover a large receptive field and carry rich contextual information, are fed into a rotating-box vehicle detection head, which effectively reduces false alarms and missed detections. Experiments on a SAR image vehicle dataset with complex scenes demonstrate that the proposed method significantly improves vehicle localization accuracy and achieves state-of-the-art performance, confirming its effectiveness. |
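A minimal sketch of the pipeline the abstract describes (attention-refined backbone features, progressive cross-level fusion, and a rotating-box detection head), assuming PyTorch. The class names mirror the abstract's SCRM, PCFM, and detection head, but every internal detail here (squeeze-and-excitation channel gating, a 7x7 spatial gate, top-down concatenate-and-convolve fusion, a six-channel prediction convolution) is an illustrative assumption, not the authors' implementation.
```python
# Hypothetical sketch of the described pipeline; internals are assumed, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SCRM(nn.Module):
    """Space-channel reconfiguration module: channel gating followed by spatial gating (assumed form)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze-and-excitation style gate.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: gate computed from pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)
        stats = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_gate(stats)


class PCFM(nn.Module):
    """Progressive cross-fusion: merge multi-level features top-down into a single map (assumed form)."""

    def __init__(self, channels: int, num_levels: int = 3):
        super().__init__()
        self.fuse = nn.ModuleList(
            [nn.Conv2d(2 * channels, channels, 1) for _ in range(num_levels - 1)]
        )

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats ordered deepest (smallest) to shallowest (largest), all with the same channel count.
        out = feats[0]
        for conv, skip in zip(self.fuse, feats[1:]):
            out = F.interpolate(out, size=skip.shape[-2:], mode="nearest")
            out = conv(torch.cat([out, skip], dim=1))
        return out


class RotatingBoxHead(nn.Module):
    """Dense head predicting (cx, cy, w, h, angle) plus an objectness score per location (assumed form)."""

    def __init__(self, channels: int):
        super().__init__()
        self.pred = nn.Conv2d(channels, 6, 1)  # 5 box parameters + 1 score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pred(x)


# Example forward pass with three backbone levels of 256 channels each.
feats = [SCRM(256)(torch.randn(1, 256, s, s)) for s in (16, 32, 64)]
preds = RotatingBoxHead(256)(PCFM(256)(feats))  # shape: (1, 6, 64, 64)
```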
Database: |
Directory of Open Access Journals |
External Link: |
|