Author:
McPhail, C., Maier, H. R., Kwakkel, J. H., Giuliani, M., Castelletti, A., Westra, S.
Subject:

Source:
Earth's Future; Feb 2018, Vol. 6 Issue 2, p169-191, 23p
Abstract:
Robustness is being used increasingly for decision analysis in relation to deep uncertainty, and many metrics have been proposed for its quantification. Recent studies have shown that the application of different robustness metrics can result in different rankings of decision alternatives, but there has been little discussion of the potential causes for this. To shed some light on this issue, we present a unifying framework for the calculation of robustness metrics, which assists with understanding how robustness metrics work, when they should be used, and why they sometimes disagree. The framework categorizes the suitability of metrics to a decision‐maker based on (1) the decision context (i.e., the suitability of using absolute performance or regret), (2) the decision‐maker's preferred level of risk aversion, and (3) the decision‐maker's preference toward maximizing performance, minimizing variance, or some higher‐order moment. This article also introduces a conceptual framework describing when relative robustness values of decision alternatives obtained using different metrics are likely to agree and disagree. This is used as a measure of how "stable" the ranking of decision alternatives is when determined using different robustness metrics. The framework is tested on three case studies: water supply augmentation in Adelaide, Australia; the operation of a multipurpose regulated lake in Italy; and flood protection for a hypothetical river based on a reach of the river Rhine in the Netherlands. The proposed conceptual framework is confirmed by the case study results, providing insight into the reasons for disagreements between rankings obtained using different robustness metrics. [ABSTRACT FROM AUTHOR]
Database:
Complementary Index
External link:
