Author:
Fang, Qiansheng; Zhang, Liang; Yan, Pu
Source:
Signal, Image & Video Processing; Jan 2025, Vol. 19, Issue 1, p1-8, 8p
Abstract:
Due to irregular airflow and the instability of the combustion process, flames exhibit rich motion characteristics. However, fire detection methods based on convolutional neural networks (CNNs) focus solely on the appearance features of individual frames and neglect the temporal information between frames. In this paper, we propose a fire detection method based on a spatio-temporal dual-stream network: the spatial stream extracts appearance features from the current frame, and the temporal stream extracts motion features between frames. We use the lightweight ShuffleNet V2 as the feature extraction network for both streams, and the depth of the temporal stream is pruned to prevent overfitting. An adaptive feature fusion module (AFFM) is then employed to combine the spatial and temporal features effectively, and the fused features are passed to the detection layer to obtain the detection results. Experimental results show that our method achieves a mean average precision of 0.863 on the test set, with a precision of 0.904 and a recall of 0.813. Our method is therefore effective at distinguishing fire from fire-like objects, yielding superior overall detection performance. [ABSTRACT FROM AUTHOR]
Database:
Complementary Index
External link:
|
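Note: the following is a minimal PyTorch sketch of the dual-stream idea described in the abstract above, not the authors' implementation. It assumes torchvision's ShuffleNet V2 (x1.0) as the backbone for both streams, interprets the depth pruning as dropping the last stage of the temporal trunk, feeds the temporal stream stacked frame differences, and replaces the AFFM with a simple gated (softmax-weighted) fusion; the 256-channel fused width and all other layer sizes are illustrative choices, and the detection head that would consume the fused feature map is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import shufflenet_v2_x1_0


def spatial_backbone() -> nn.Sequential:
    """Full ShuffleNet V2 (x1.0) trunk for appearance features (1024 output channels)."""
    m = shufflenet_v2_x1_0()  # weights omitted here; load pretrained weights as needed
    return nn.Sequential(m.conv1, m.maxpool, m.stage2, m.stage3, m.stage4, m.conv5)


def temporal_backbone(in_channels: int) -> nn.Sequential:
    """Depth-reduced ShuffleNet V2 trunk for motion features.

    "Pruned" is interpreted here as dropping stage4 and conv5 (an assumption);
    the stem conv is rebuilt so it accepts stacked frame differences instead of RGB.
    """
    m = shufflenet_v2_x1_0()
    m.conv1[0] = nn.Conv2d(in_channels, 24, kernel_size=3, stride=2, padding=1, bias=False)
    return nn.Sequential(m.conv1, m.maxpool, m.stage2, m.stage3)  # 232 output channels


class GatedFusion(nn.Module):
    """Stand-in for the AFFM: learns two per-stream weights from globally pooled
    features and mixes the streams after projecting them to a common width."""

    def __init__(self, c_spatial: int, c_temporal: int, c_out: int):
        super().__init__()
        self.proj_s = nn.Conv2d(c_spatial, c_out, kernel_size=1)
        self.proj_t = nn.Conv2d(c_temporal, c_out, kernel_size=1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(2 * c_out, 2),
            nn.Softmax(dim=1),
        )

    def forward(self, f_s: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
        f_s = self.proj_s(f_s)
        f_t = self.proj_t(f_t)
        # The temporal map is larger because its trunk is shallower; resize to match.
        f_t = F.interpolate(f_t, size=f_s.shape[-2:], mode="bilinear", align_corners=False)
        w = self.gate(torch.cat([f_s, f_t], dim=1))  # (N, 2) per-stream weights
        return w[:, 0, None, None, None] * f_s + w[:, 1, None, None, None] * f_t


if __name__ == "__main__":
    spatial = spatial_backbone()
    temporal = temporal_backbone(in_channels=6)      # e.g. two stacked RGB frame differences
    fuse = GatedFusion(c_spatial=1024, c_temporal=232, c_out=256)

    frame = torch.randn(1, 3, 224, 224)              # current frame (spatial stream input)
    motion = torch.randn(1, 6, 224, 224)             # frame differences (temporal stream input)
    fused = fuse(spatial(frame), temporal(motion))
    print(fused.shape)                               # torch.Size([1, 256, 7, 7])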