Author:
Huijiao Qiao, Weiqi Qian, Haifeng Hu, Xingbo Huang, Jiequn Li
Language:
English
Year of publication:
2024
Subject:

Source:
Sensors, Vol 24, Iss 16, p 5205 (2024)
Document type:
article
ISSN:
1424-8220
DOI:
10.3390/s24165205
Description:
Data and reports indicate an increasing frequency and intensity of natural disasters worldwide. Buildings play a crucial role in disaster response and damage assessment, aiding the planning of rescue efforts and the evaluation of losses. Despite advances in applying deep learning to building extraction, challenges remain in handling complex natural disaster scenes and in reducing reliance on labeled datasets. Recent advances in satellite video are opening a new avenue for efficient and accurate building extraction research. By thoroughly mining the characteristics of disaster video data, this work provides a new semantic segmentation model for accurate and efficient building extraction from a limited amount of training data. The model consists of two parts: a prediction module and an automatic correction module. The prediction module, built on a basic encoder–decoder structure, initially extracts buildings using a limited amount of training data that can be obtained instantly. The automatic correction module then takes the output of the prediction module as input, constructs a criterion for identifying pixels with erroneous semantic information, and uses optical flow values to extract the accurate corresponding semantic information from the corrected frame. The experimental results demonstrate that the proposed method outperforms other methods in accuracy and computational complexity in complicated natural disaster scenes.
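The correction step summarized in the abstract (propagating labels from a corrected frame along optical flow and overwriting pixels flagged as erroneous) could be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the backward-flow convention, the nearest-neighbour warp, and the externally supplied `suspect_mask` criterion are all assumptions made here for clarity.

```python
import numpy as np

def warp_labels(prev_labels, flow):
    """Propagate a label map from the previous (corrected) frame to the
    current frame by nearest-neighbour sampling along optical flow.
    Assumed convention: flow[y, x] = (dy, dx) points from a current-frame
    pixel back to its source location in the previous frame."""
    h, w = prev_labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + flow[..., 1]).astype(int), 0, w - 1)
    return prev_labels[src_y, src_x]

def correct_prediction(pred, prev_labels, flow, suspect_mask):
    """Overwrite pixels flagged as erroneous (suspect_mask == True) with
    labels warped in from the corrected previous frame; keep the
    prediction module's output everywhere else."""
    warped = warp_labels(prev_labels, flow)
    corrected = pred.copy()
    corrected[suspect_mask] = warped[suspect_mask]
    return corrected

# Toy demo: a 2x2 scene with zero flow (static camera), where the
# prediction module mislabels one building pixel and the error
# criterion flags exactly that pixel as suspect.
prev_labels = np.array([[1, 1],
                        [0, 0]])            # corrected previous frame
pred = np.array([[1, 0],
                 [0, 0]])                   # pixel (0, 1) is wrong
flow = np.zeros((2, 2, 2))                  # no motion between frames
suspect_mask = np.zeros((2, 2), dtype=bool)
suspect_mask[0, 1] = True
corrected = correct_prediction(pred, prev_labels, flow, suspect_mask)
```

In this toy run, only the suspect pixel is replaced by its flow-propagated label; all other pixels keep the prediction module's output.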
Database:
Directory of Open Access Journals
External link:
