Author:
Chang Ding, Huilin Mu, Yun Zhang
Language:
English
Year of publication:
2024
Source:
Remote Sensing, Vol 16, Iss 4, p 605 (2024)
Document type:
article
ISSN:
2072-4292
DOI:
10.3390/rs16040605
Description:
Multi-moving-target imaging in a synthetic aperture radar (SAR) system poses a significant challenge owing to target defocusing and contamination by strong background clutter. To address this problem, a new deep-convolutional-neural-network (CNN)-assisted method is proposed for multi-moving-target imaging in a SAR-GMTI system. The multi-moving-target signal can be modeled as a multicomponent linear-frequency-modulated (LFM) signal with additive perturbation. A fully convolutional network named MLFMSS-Net was designed, based on an encoder–decoder architecture, to extract the most-energetic LFM signal component from the multicomponent LFM signal in the time domain. Without prior knowledge of the target number, an iterative signal-separation framework based on the well-trained MLFMSS-Net is proposed to separate the multi-moving-target signal into multiple LFM signal components while eliminating the residual clutter. The framework exhibits high imaging robustness and low dependence on the system parameters, making it a suitable solution for practical imaging applications. Consequently, a well-focused multi-moving-target image can be obtained by parameter estimation and secondary azimuth compression for each separated LFM signal component. Simulations and experiments on both airborne and spaceborne SAR data showed that the proposed method is superior to traditional imaging methods in both imaging quality and efficiency.
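The signal model described in the abstract (a multicomponent LFM signal with additive perturbation) can be sketched as follows. This is a minimal illustrative simulation, not the paper's implementation: the amplitudes, center frequencies, chirp rates, and noise level are assumed values chosen only to show the model's structure, and the final step merely checks which component is most energetic, mirroring the extraction order used by the iterative separation framework.

```python
import numpy as np

fs = 1000.0                      # sampling rate (Hz), assumed value
t = np.arange(0, 1.0, 1.0 / fs)  # 1 s observation window

def lfm(amp, f0, k, t):
    """One LFM component: amp * exp(j*2*pi*(f0*t + 0.5*k*t**2))."""
    return amp * np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Two moving targets -> two LFM components with different energies
# (all parameters here are illustrative, not from the paper).
components = [lfm(2.0, 50.0, 80.0, t),    # most-energetic component
              lfm(1.0, -30.0, 40.0, t)]   # weaker component

# Additive perturbation modeled as complex white Gaussian noise.
rng = np.random.default_rng(0)
clutter = 0.1 * (rng.standard_normal(t.size)
                 + 1j * rng.standard_normal(t.size))

signal = sum(components) + clutter  # multicomponent LFM + perturbation

# The iterative framework extracts the most-energetic component first;
# here we only verify which component dominates the total energy.
energies = [np.sum(np.abs(c) ** 2) for c in components]
print(int(np.argmax(energies)))  # → 0
```

In the paper's pipeline, MLFMSS-Net would take the role played here by the energy comparison: it extracts the strongest LFM component in the time domain, the component is subtracted, and the process repeats until no significant component remains.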
Database:
Directory of Open Access Journals