Radon Implicit Field Transform (RIFT): Learning Scenes from Radar Signals
Author: Bao, Daqian; Saad-Falcon, Alex; Romberg, Justin
Publication Year: 2024
Document Type: Working Paper
Description: Data acquisition in array signal processing (ASP) is costly: high angular and range resolutions require large antenna apertures and wide frequency bandwidths, and data requirements grow multiplicatively with viewpoints and frequencies, increasing the collection burden. Implicit Neural Representations (INRs), neural network models of 3D scenes, offer compact, continuous representations that can be learned from minimal data and interpolate to unseen viewpoints, potentially reducing sampling costs in ASP. We propose the Radon Implicit Field Transform (RIFT), which combines a radar forward model (the Generalized Radon Transform, GRT) with an INR-based scene representation learned from radar signals. The method extends to other ASP problems by replacing the GRT with an appropriate forward model. In experiments, we synthesize radar data using the GRT and train the INR by minimizing the radar signal reconstruction error. We then render the scene from the trained INR and evaluate it against ground truth, introducing two new error metrics: phase-Root Mean Square Error (p-RMSE) and magnitude-Structural Similarity Index Measure (m-SSIM). Compared to traditional scene models, RIFT achieves up to a 188% improvement in scene reconstruction with only 10% of the data; with the same amount of data, it achieves 3x better reconstruction and shows a 10% improvement when generalizing to unseen viewpoints.

Comment: A version of this work is under review as a submission to the ICLR 2025 conference.
Database: arXiv
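The description outlines RIFT's core loop: represent the scene as an INR, push it through a differentiable radar forward model, and fit the INR by minimizing the radar signal reconstruction error. The following is a minimal PyTorch sketch of that idea only; the point-target far-field forward model, the network architecture, and all names and parameters (`SceneINR`, `radar_forward`, the antenna position, the wavenumber sweep) are illustrative assumptions, not the paper's implementation of the GRT.

```python
import torch
import torch.nn as nn

class SceneINR(nn.Module):
    """Implicit neural representation: maps a 3D point to a complex
    reflectivity value. Hypothetical stand-in, not the paper's network."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # real and imaginary parts
        )

    def forward(self, xyz):  # xyz: (N, 3) scene coordinates
        out = self.net(xyz)
        return torch.complex(out[:, 0], out[:, 1])  # (N,) complex reflectivity

def radar_forward(reflectivity, points, tx_pos, wavenumbers):
    """Simplified discrete radar forward model standing in for the GRT:
    each measurement sums scene reflectivity under a two-way
    propagation phase exp(-2j * k * r)."""
    r = torch.linalg.norm(points - tx_pos, dim=-1)              # (N,) ranges
    phase = torch.exp(-2j * wavenumbers[:, None] * r[None, :])  # (K, N)
    return phase @ reflectivity                                 # (K,) signal

# Toy loop: fit the INR by minimizing radar signal reconstruction error.
points = torch.rand(4096, 3)                       # sampled scene points
tx_pos = torch.tensor([0.0, 0.0, -5.0])            # hypothetical antenna position
k = torch.linspace(60.0, 80.0, 64)                 # hypothetical wavenumber sweep
measured = torch.randn(64, dtype=torch.complex64)  # placeholder for real radar data

inr = SceneINR()
opt = torch.optim.Adam(inr.parameters(), lr=1e-4)
for step in range(1000):
    pred = radar_forward(inr(points), points, tx_pos, k)
    loss = (pred - measured).abs().pow(2).mean()   # signal-domain MSE
    opt.zero_grad()
    loss.backward()
    opt.step()
```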
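The abstract also names two new metrics, p-RMSE and m-SSIM, without defining them. One plausible reading, sketched below under stated assumptions: p-RMSE as the RMSE of the wrapped phase error between complex renderings, and m-SSIM as SSIM computed on magnitude images. The paper's exact definitions may differ.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def p_rmse(pred, gt):
    """phase-RMSE: RMSE of the wrapped phase error between two
    complex-valued renderings (an assumed definition)."""
    dphi = np.angle(pred * np.conj(gt))  # wrapped phase difference in (-pi, pi]
    return np.sqrt(np.mean(dphi ** 2))

def m_ssim(pred, gt):
    """magnitude-SSIM: SSIM computed on the magnitude images
    (an assumed definition)."""
    a, b = np.abs(pred), np.abs(gt)
    rng = float(max(a.max(), b.max()) - min(a.min(), b.min()))
    return ssim(a, b, data_range=rng)
```

Multiplying by the conjugate before taking the angle wraps the phase error into (-pi, pi], avoiding the spurious 2*pi jumps a naive subtraction of angles would introduce.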