Author: |
Mingzheng Chen, Xing Wang, Meizhen Wang, Xuejun Liu, Yong Wu, Xiaochu Wang |
Language: |
English |
Year of publication: |
2022 |
Subject: |
|
Source: |
Remote Sensing, Vol 14, Iss 22, p 5750 (2022) |
Document type: |
article |
ISSN: |
2072-4292 |
DOI: |
10.3390/rs14225750 |
Description: |
Rainfall data have profound significance for meteorology, climatology, hydrology, and environmental science. However, existing rainfall observation methods (including ground-based rain gauges and radar-/satellite-based remote sensing) are limited in spatiotemporal resolution and cannot meet the needs of high-resolution application scenarios (urban waterlogging, emergency rescue, etc.). Widespread surveillance cameras have been regarded as alternative rain gauges in existing studies. Surveillance audio, which continuously records rainfall acoustic signals, should therefore be considered a data source for high-resolution, all-weather rainfall observation. In this study, a method named parallel neural network based on attention mechanisms and multi-scale fusion (PNNAMMS) is proposed for automatically classifying rainfall levels from surveillance audio. The proposed model employs a parallel dual-channel network, with a spatial channel extracting the frequency-domain correlation and a temporal channel capturing the time-domain continuity of the rainfall sound. Additionally, attention mechanisms are applied to the two channels to extract significant spatiotemporal elements. A multi-scale fusion method is adopted to fuse features at different scales in the spatial channel for more robust performance in complex surveillance scenarios. Experiments showed that our method achieved an estimation accuracy of 84.64% for rainfall levels and outperformed previously proposed methods. |
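The record does not reproduce the model itself; the following is a rough, hypothetical numpy sketch of the dual-channel idea described in the abstract. All shapes, pooling scales, and the random stand-in weights are illustrative assumptions, not the authors' PNNAMMS implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two channels described in the abstract:
# spatial channel  = frequency-domain features (e.g. a spectrogram),
# temporal channel = time-domain frame features of the same audio clip.
spectrogram = rng.normal(size=(64, 128))      # 64 freq bins x 128 feature dims
waveform_frames = rng.normal(size=(128, 32))  # 128 time frames x 32 feature dims

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_summary(feats, w):
    """Simple attention pooling: score each row, softmax, weighted sum."""
    scores = softmax(feats @ w)  # one weight per row
    return scores @ feats        # attended feature vector

# Per-channel attention (weights would be learned; random here).
spat_vec = attention_summary(spectrogram, rng.normal(size=128))     # (128,)
temp_vec = attention_summary(waveform_frames, rng.normal(size=32))  # (32,)

# A second, coarser pooling scale on the spatial channel stands in for
# "multi-scale fusion": average over groups of frequency bins.
coarse = spectrogram.reshape(16, 4, 128).mean(axis=(0, 1))          # (128,)

# Fuse both channels and feed a linear head for rainfall-level logits.
fused = np.concatenate([spat_vec, coarse, temp_vec])                # (288,)
n_levels = 4  # e.g. none / light / moderate / heavy (assumed labels)
probs = softmax(fused @ rng.normal(size=(fused.size, n_levels)))
```

In the paper's actual model the two channels are learned networks and the attention weights are trained end to end; this sketch only shows how attended per-channel summaries can be pooled at multiple scales and concatenated before classification.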
Database: |
Directory of Open Access Journals |
External link: |
|