Invited perspectives: Safeguarding the usability and credibility of flood hazard and risk assessments
Abstract. Flood hazard and risk assessments (FHRAs), and the underlying models, form the basis of decisions regarding flood mitigation and climate adaptation measures and are thus imperative for safeguarding communities from the devastating consequences of flood events. In this perspective paper, we discuss how FHRAs should be validated to be fit-for-purpose in order to optimally support decision-making. We argue that current validation approaches focus on technical issues, with insufficient consideration of the context in which decisions are made. To address this issue, we propose a novel validation framework for FHRAs, structured in a three-level hierarchy: process-based, outcome-based, and impact-based. Our framework adds crucial dimensions to current validation approaches, such as the need to understand the possible impacts on society when the assessment has large errors. It further emphasizes the essential role of stakeholder participation, objectivity, and verifiability when assessing flood hazard and risk. Using the example of flood emergency management, we discuss how the proposed framework can be implemented. Although we have developed the framework for flooding, our ideas are also applicable to assessing risk caused by other types of natural hazards.
RC1: 'Comment on egusphere-2024-856', Thorsten Wagener, 24 May 2024
Merz and colleagues provide an interesting, well-supported and relevant discussion of the issue of validation in the context of flood hazard/risk assessments. My comments below will hopefully help to clarify some points and to push their discussion just a little bit further. My points are not listed in order of importance.
[1] One issue that could be better explained is the meaning of key terminology used. The authors state (line 149…): “Firstly, validation can establish legitimacy but not truth.” The relevance of this statement is difficult to understand unless you tell the audience what the actual meaning of the term “validation” is. The main argument of Oreskes (whom the authors cite) is that validation translates into something like “establishing the truth”. It would be helpful to include this here. The same need for terminological clarity holds for the term “verification”, which the authors also use.
[2] A wider point. I find this discussion of “true” in the context of models really unhelpful. A model cannot be true (in my opinion). A model is by definition a simplification of reality. At least for environmental systems, I do not see how there could be a single way to simplify the system and thereby reach a “true” model. This is why (if I remember correctly) Oreskes and colleagues suggest using the term evaluation instead of validation. Why did the authors not include this element in their discussion, but instead assume that validation is the term we should continue to use?
[3] The authors state (Line 152…): “A model that does not reproduce observed data indicates a flaw in the modelling, but the reverse is not true.” Well, no model perfectly reproduces observations (at least not in our field). So, we are generally talking about how well or how poorly a model reproduces observations.
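To make this matter-of-degree point concrete, here is a minimal sketch (Python; the discharge values are invented purely for illustration) of the Nash-Sutcliffe efficiency commonly used in hydrology, which returns a continuous degree of fit rather than a true/false verdict:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 means a perfect fit; values <= 0 mean
    the model is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Invented annual peak discharges (m^3/s), for illustration only.
obs = [820.0, 455.0, 1210.0, 640.0, 980.0]
sim = [790.0, 510.0, 1105.0, 700.0, 1020.0]
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")  # a degree of fit, not a truth value
```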
[4] The authors state (Line 154): “Finally, validation is a matter of degree, a value judgement within a particular decision-making context. Validation therefore constitutes a subjective process.” Yes, I agree that validation is a question of degree. So, validation must contain subjective choices (of thresholds), but does that make the process subjective?
[5] In the context of what the authors present, isn’t the key question how we decide on appropriate thresholds for this degree of validation? The current discussion does not say much about how we find and agree on these thresholds.
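To illustrate what is at stake in agreeing on thresholds, a minimal sketch (Python; the decision contexts and threshold values are hypothetical) showing how one and the same skill score can be acceptable for one purpose and unacceptable for another:

```python
def fit_for_purpose(score, threshold):
    """Turn a continuous skill score into a purpose-specific judgement.
    The threshold itself is the subjective, negotiated quantity."""
    return score >= threshold

# Hypothetical decision contexts and thresholds agreed with stakeholders.
thresholds = {"land-use planning": 0.5, "dike design": 0.7, "early warning": 0.8}

nse = 0.68  # one and the same model skill score
for purpose, t in thresholds.items():
    verdict = "acceptable" if fit_for_purpose(nse, t) else "not acceptable"
    print(f"{purpose:>17}: NSE {nse:.2f} vs threshold {t:.2f} -> {verdict}")
```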
[6] One issue is that model validation in most (all?) studies ends in either full acceptance or complete rejection of a model for a specific purpose. Isn’t this black-and-white view a key problem? How do you account for the imperfect suitability of models? In current studies that include some type of validation, the degree to which a model failed to reproduce the observations is generally not carried forward into future predictions. How would we solve this problem?
The authors state that (Line 352) “It is therefore important to state the range for which the model is credible.” However, I do not think this solves the issue. For one, it still implies that the model is correct if used in the right range, which I think is a very strong assumption.
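One conceivable way out of the accept/reject dichotomy is to carry the degree of model skill forward into the prediction, for example by weighting ensemble members by their relative skill instead of filtering them. A minimal, GLUE-flavoured sketch (Python; all skill scores and flood estimates are invented):

```python
import numpy as np

# Hypothetical skill scores (e.g. NSE) for five candidate flood models.
skill = np.array([0.82, 0.74, 0.55, 0.31, 0.08])

# Hypothetical 100-year peak-flow estimates (m^3/s) from the same models.
q100 = np.array([1450.0, 1380.0, 1620.0, 1900.0, 2400.0])

# Instead of accepting or rejecting each model outright, weight its
# prediction by relative skill (an informal, GLUE-like likelihood).
weights = skill / skill.sum()

print("weights:", np.round(weights, 3))
print(f"skill-weighted Q100: {np.sum(weights * q100):.0f} m^3/s")
```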
[7] A key problem for impact or risk models – as far as I know – is the lack of impact data for floods (or other perils) (e.g. doi.org/10.5194/gmd-14-351-2021). This is one reason why CAT modelling in practice is concentrated among a few firms (which own such data). How can we overcome this problem? How can we “validate” without data?
[8] I like the inclusion of Sensitivity Analysis as a strategy in Table 1 and in the wider discussion in the paper, though I do think that its value is wider than discussed here. Wagener et al. (2022) discuss at least four questions that this approach can address in the context of model evaluation (a term used to avoid “validation”, in line with the ideas of Oreskes et al.): (1) Do modeled dominant process controls match our system perception? (2) Is my model's sensitivity to changing forcing as expected? (3) Do modeled decision levers show adequate influence? (4) Can we attribute uncertainty sources throughout the projection horizon?
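For illustration only, a deliberately crude one-at-a-time screening sketch (Python; the toy damage model and input ranges are hypothetical) of the kind of check behind question (1), i.e. whether the modeled dominant controls match our system perception; in practice the global methods discussed in Wagener et al. (2022) would be used instead:

```python
import numpy as np

def toy_damage(depth, vulnerability, exposure):
    """Hypothetical toy damage model: losses saturate with water depth,
    scaled by a vulnerability parameter and the exposed asset value."""
    return exposure * (1.0 - np.exp(-vulnerability * depth))

# Crude one-at-a-time screening: vary each input over a plausible span
# while holding the others at nominal values, and record the output range
# as a first indication of which inputs dominate the damage estimate.
nominal = {"depth": 1.0, "vulnerability": 0.8, "exposure": 100.0}
spans = {"depth": (0.5, 2.0), "vulnerability": (0.4, 1.2), "exposure": (50.0, 200.0)}

for name, (lo, hi) in spans.items():
    outputs = [toy_damage(**{**nominal, name: v}) for v in np.linspace(lo, hi, 25)]
    print(f"{name:>13}: damage from {min(outputs):6.1f} to {max(outputs):6.1f}")
```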
[9] Another interesting reference for the authors might be the study by Eker et al. (2018) who reviewed validation practices. They found, among other things, a total dominance of validation strategies using fit to historical observations (even in the context of climate change studies).
[10] Some of the points discussed here are also part of what others have called uncertainty auditing (doi.org/10.5194/hess-27-2523-2023) or sensitivity auditing (doi.org/10.1016/j.futures.2022.103041). These ideas might be interesting to the authors.
Thorsten Wagener
References
Eker, S., Rovenskaya, E., Obersteiner, M., & Langan, S. (2018). Practice and perspectives in the validation of resource management models. Nature Communications, 9, 5359.
Wagener, T., Reinecke, R., & Pianosi, F. (2022). On the evaluation of climate change impact models. WIREs Climate Change. https://doi.org/10.1002/wcc.772
Citation: https://doi.org/10.5194/egusphere-2024-856-RC1