Abstract: |
The intensity of an atmospheric river (AR) is only one of the factors influencing the damage it will cause. We use random forest models fit to hazard, exposure, and vulnerability data at different spatial and temporal scales in California to predict the probability that a given AR event will cause flood damage, as measured by National Flood Insurance Program (NFIP) claims. We first demonstrate the usefulness of data-driven models and interpretable machine learning for identifying and describing drivers of AR flood damage. Hazard features, particularly measures of AR intensity such as total precipitation, increase the probability of damage with increasing values up to a threshold, beyond which the probability of damage saturates. Although hazard is generally the most important risk dimension across all models, exposure and vulnerability contribute up to a third of the explanatory power. Exposure and vulnerability features generally increase the probability of damage with increasing values, apart from a few instances that can be explained by physical intuition, but tend to affect the probability of damage less for the largest AR events. Comparisons between random forest models at different spatial and temporal scales show general agreement. We then examine limitations inherent in publicly available exposure, vulnerability, and loss data, focusing on the difference in temporal resolution between variables from different risk dimensions and on discrepancies between NFIP claims and total flood losses, and describe how those limitations may affect the model results. Overall, the application of interpretable machine learning to understand the contributions of exposure and vulnerability to AR-driven flood risk has identified potential community risk drivers and strategies for resilience, but the results must be considered in the context of the data that produced them.
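As a minimal illustrative sketch only: the abstract does not specify the paper's features, data, or model configuration, so the feature names and synthetic data below are hypothetical stand-ins for the hazard, exposure, and vulnerability variables it describes. The sketch shows the general shape of the workflow, a random forest classifier interpreted with scikit-learn's permutation importance and partial dependence, which is one standard way to rank risk dimensions and trace the saturating damage-probability response mentioned above.

```python
# Hypothetical sketch of the abstract's workflow; not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance, partial_dependence

rng = np.random.default_rng(0)
n = 2000

# Hypothetical predictors, one per risk dimension:
#   col 0: total_precip_mm            (hazard)
#   col 1: exposed_property_value     (exposure, scaled 0-1)
#   col 2: social_vulnerability_index (vulnerability, scaled 0-1)
X = np.column_stack([
    rng.gamma(2.0, 50.0, n),
    rng.uniform(0.0, 1.0, n),
    rng.uniform(0.0, 1.0, n),
])

# Synthetic damage labels whose probability saturates with precipitation,
# mimicking the threshold behavior the abstract describes.
logit = 0.03 * np.minimum(X[:, 0], 150.0) + 2.0 * X[:, 1] + X[:, 2] - 7.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Permutation importance ranks the three risk dimensions by explanatory power.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("mean importances (hazard, exposure, vulnerability):",
      imp.importances_mean)

# Partial dependence on precipitation traces the damage-probability curve,
# which should rise and then flatten past the saturation threshold.
pd_precip = partial_dependence(model, X, features=[0])
print("partial dependence grid:", pd_precip["grid_values"][0][:5], "...")
```

In this framing, the "threshold point, after which the probability of damage saturates" would appear as a plateau in the partial-dependence curve for the hazard feature, while the smaller but nonzero permutation importances of the exposure and vulnerability columns correspond to the abstract's finding that those dimensions contribute up to a third of the explanatory power.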