This preprint is distributed under the Creative Commons Attribution 4.0 License.
Global sensitivity analysis of large-scale flood loss models
Abstract. Flood loss models are increasingly used in the (re)insurance sector to inform a range of financial decisions. These models simulate the interactions between flood hazard, vulnerability and exposure over large spatial domains, requiring a range of input information and modelling assumptions. Due to this high level of complexity, evaluating the impact of uncertain input data and assumptions on modelling results, and therefore the overall model “acceptability”, remains a very complex process. In this paper, we advocate for the use of global sensitivity analysis (GSA), a generic technique to analyse the propagation of multiple uncertainties through mathematical models, to improve the sensitivity testing of flood loss models and the identification of their key sources of uncertainty. We discuss key challenges in the application of GSA to large-scale flood models, propose pragmatic strategies to overcome these challenges, and showcase the type of insights that can be obtained by GSA through two proof-of-principle applications to a commercial model, JBA Risk Management’s flood loss model, for the transboundary Rhine River basin in Europe, and Queensland in Australia.
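For readers unfamiliar with GSA, the sketch below illustrates the general idea on a toy flood-loss function with three uncertain inputs (a hazard depth multiplier, a vulnerability-curve scaling and an exposure scaling): many input samples are propagated through the model and variance-based (Sobol) indices rank the inputs by their contribution to output variance. The toy function, the input names and ranges, and the choice of estimators are illustrative assumptions only; they do not represent JBA's model or the specific GSA method used in the paper.

```python
# A minimal, self-contained sketch of variance-based global sensitivity analysis
# (first-order and total-order Sobol indices) applied to a toy flood-loss
# function. Everything below is illustrative, not the paper's model or ranges.
import numpy as np

rng = np.random.default_rng(42)
names = ["depth_multiplier", "vuln_scaling", "exposure_scaling"]
bounds = np.array([[0.8, 1.2],   # hazard: scaling of simulated flood depth
                   [0.7, 1.3],   # vulnerability: scaling of the damage ratio
                   [0.9, 1.1]])  # exposure: scaling of the asset value
k, N = len(names), 20000

def toy_loss(x):
    """Toy loss: saturating damage ratio times exposed value (illustrative only)."""
    depth = 1.5 * x[:, 0]                               # nominal depth of 1.5 m
    damage_ratio = np.minimum(1.0, 0.4 * depth * x[:, 1])
    return damage_ratio * 100000.0 * x[:, 2]            # arbitrary currency units

def sample(n):
    """Uniform Monte Carlo sample within the input bounds."""
    return bounds[:, 0] + rng.random((n, k)) * (bounds[:, 1] - bounds[:, 0])

A, B = sample(N), sample(N)          # two independent sample matrices
fA, fB = toy_loss(A), toy_loss(B)
var = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(names):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]             # replace column i of A with column i of B
    fAB = toy_loss(AB_i)
    S1 = np.mean(fB * (fAB - fA)) / var          # first-order index (Saltelli 2010)
    ST = 0.5 * np.mean((fA - fAB) ** 2) / var    # total-order index (Jansen 1999)
    print(f"{name:18s}  S1={S1:5.2f}  ST={ST:5.2f}")
```

A gap between the total-order and first-order index of an input would indicate that part of its influence comes through interactions with the other inputs, which is one of the features that distinguishes GSA from one-at-a-time sensitivity testing.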
Status: open (until 05 Oct 2025)
- CC1: 'Comment on egusphere-2025-3310', Adam Pollack, 25 Aug 2025
- CC2: 'Comment on egusphere-2025-3310', Yukiko Hirabayashi, 08 Sep 2025
General comments
Pianosi et al. examined the dominant input uncertainties affecting flood-induced annual loss in the Rhine River basin in Europe and in Queensland, Australia. They used JBA Risk Management's flood loss model to conduct a global sensitivity analysis. Overall, this study represents a contribution to the large-scale flood modelling community and has the potential to help model developers improve model performance by identifying the key sources of uncertainty.
I believe that the research methods and the obtained results are described clearly. One of my concerns lies in the characterization of uncertainty presented in Table 1. First, I do not fully understand the rationale for assigning a ±50% range to the return period, a quantity on which losses depend nonlinearly. Furthermore, the return periods used in hazard maps and loss calculations are generally predetermined based on river improvement standards as well as the economic and geographical conditions of the site. I therefore find it difficult to grasp the reasoning behind discussing uncertainty by varying this value itself.
With regard to the flood event set, I can accept the idea of examining the uncertainty range of the event generation model. However, in determining the ranges of parametric and structural uncertainty in the flood inundation model, I believe that some evidence-based justification should be provided. For example, one might reasonably consider the typical error ranges of global models in precipitation data, evapotranspiration processes, or river discharge calculations.
Since variations in the vulnerability curve directly influence the loss calculation, I believe that the method for determining its range of uncertainty should also be explained with due care. As for the damage ratio, I assume it is determined based on structural types, for instance, distinguishing between above-floor and below-floor inundation. Regarding the lower bound of the depth range, is it assumed to correspond to the height of the first floor? In Table 1, DR_min can be set to zero; does this mean that cases in which no damage occurred at the same inundation depth are taken as the basis for assigning zero damage? Providing explanations for these points would enable a more informed assessment of whether the results are valid.
Finally, the most novel aspect of this study appears to be the simultaneous evaluation of multiple sources of uncertainty. However, because the adopted approach identifies only the most influential factor in each comparison, the current way of defining uncertainty ranges shows that the vulnerability curve contributes the largest share, followed by the flood event set. Yet the description of how these uncertainties ultimately affect the AAL or LE is insufficient. For example, if the vulnerability curve were increased by 30% (on the conservative side) and the AAL recalculated, by how much would the AAL value change? Would it be by 10%, or by a factor of three? Part of the answer can be seen in Figure 7 for the AAL in the case of Queensland, but explicitly stating the magnitude of such differences in the target outcomes would, in my view, greatly strengthen the analysis.
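To make the kind of one-at-a-time check suggested in this comment concrete, the sketch below scales a toy depth-damage curve by +30% and recomputes the average annual loss (AAL) from a small, made-up event loss table. All event rates, depths, the exposure value and the damage curve are hypothetical and unrelated to the paper's data; the sketch only illustrates the form of the calculation the comment asks to be reported.

```python
# A minimal sketch of the suggested check: scale the vulnerability (damage-ratio)
# curve by +30% and recompute the AAL from a toy event loss table.
# All numbers below are made-up illustrative values, not data from the paper.
import numpy as np

# Toy event set: annual occurrence rates and flood depths (m) at one asset
rates  = np.array([0.10, 0.02, 0.01, 0.002])
depths = np.array([0.5, 1.0, 2.0, 3.5])
exposure = 200000.0  # asset value, arbitrary currency units

def damage_ratio(depth, scale=1.0):
    """Toy depth-damage curve, capped at total loss."""
    return np.minimum(1.0, scale * 0.3 * depth)

def aal(scale):
    """AAL as the rate-weighted sum of event losses."""
    return np.sum(rates * damage_ratio(depths, scale) * exposure)

base, shifted = aal(1.0), aal(1.3)
print(f"AAL baseline: {base:,.0f}   AAL with +30% vulnerability: {shifted:,.0f}")
print(f"Relative change: {100 * (shifted / base - 1):.1f}%")
```

Because the damage ratio is capped at total loss, the resulting change in AAL need not be proportional to the vulnerability shift, which is precisely why reporting the realised magnitude of such differences, as requested above, would be informative.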
Specific comments
Fig.1: The upper part of Fig. 1 is almost identical to the text and therefore cannot be considered an effective schematic figure. The lower illustration comparing “precise” and “accurate” is easy to understand, but I would suggest reconsidering whether this figure is truly necessary.
Fig.3, caption: I did not find the use of colored boxes particularly effective. In the figure, the hazard map and event set are shown as inputs to the “interpolation” box, but in reality, the flood depth is selected from the chosen event set. Therefore, I believe there may be a more appropriate way of illustrating this process than labeling it as interpolation.
L294: Is there any rationale for setting the value to 0.3? How would the results change if a different value, for example 0.5, were used instead?
L338-349: You have evaluated which factors influence the basin as a whole, but is it appropriate to assess locations that experienced the most severe damage—for example, areas with greater inundation depth—in the same manner as those less affected? For flood-prone areas, might we expect different results, such as certain uncertainties having a greater impact? Since Fig. 6 already organizes the analysis by different return periods, I thought it might be useful to expand the explanation in connection with that figure.
L411-427: Could it be that the aggregation method employed here is problematic? In particular, the fact that the residential portion changes so drastically gives the impression that there may be an issue with the approach itself. If you have any ideas or suggestions for how this could be improved, adding a brief comment on that point would, I believe, be valuable.
Technical corrections
L121: Is “Ls,t” a typographical error for “Ls,k”?
L1599: Is "(FDs,t)" a typographical error for "(FDs,k)"?
Citation: https://doi.org/10.5194/egusphere-2025-3310-CC2
Viewed

| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 979 | 69 | 14 | 1,062 | 22 | 18 |
It was nice to see this paper show up on my ResearchGate feed. I greatly appreciate the overview of outcome-based and process-based evaluation of models and the observational challenges for the former. As a reader, I'm more inclined to like a paper that takes this framing. With that in mind, I hope the reviewers and editor will pay attention to a few things that jumped out to me:
1) The manuscript insufficiently reviews and cites relevant literature. For an incomplete set of studies that do some form of global sensitivity analysis for either a full risk-modelling workflow or some component (e.g., inundation modelling), please see: