Author: |
Hao Huang, Zhaoli Wang, Yaoxing Liao, Weizhi Gao, Chengguang Lai, Xushu Wu, Zhaoyang Zeng |
Language: |
English |
Year of Publication: |
2024 |
Subject: |
|
Source: |
Ecological Informatics, Vol 84, Iss , Pp 102904- (2024) |
Document Type: |
article |
ISSN: |
1574-9541 |
DOI: |
10.1016/j.ecoinf.2024.102904 |
Description: |
Convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) are popular deep learning architectures currently used for rapid flood simulation. However, deep learning algorithms are difficult to explain, acting like a “black box” that offers little insight. To reveal the intrinsic prediction mechanism of such architectures, we adopted a coupled CNN-LSTM model combined with the explainability technique SHapley Additive exPlanations (SHAP) to predict the rainfall-runoff process and identify key input feature factors, taking the Beijiang River Basin in China as a case study, so as to improve the explainability and credibility of this black-box model. The results show that the coupled CNN-LSTM model outperforms the individual CNN and LSTM models in flood prediction under the longest foresight period of 25 h: the Nash-Sutcliffe Efficiency (NSE) of the coupled model reaches 0.838, while those of the CNN and LSTM models are 0.737 and 0.745, respectively. The coupled CNN-LSTM model delivers high-accuracy predictions, consistently exhibiting NSEs greater than 0.8 across different input time steps and foresight periods. The prediction accuracy is mainly influenced by the observed runoff at the downstream hydrological station at previous time points, whereas the effects of the input time-step length and the foresight period are comparatively negligible. This study provides a new perspective for understanding the potential physical mechanisms of black-box models for rainfall-runoff prediction and highlights the importance and promise of applying explainability techniques. |
Database: |
Directory of Open Access Journals |
External Link: |
|