This work is distributed under the Creative Commons Attribution 4.0 License.
A mathematical framework for quantifying physical damage over time from concurrent and consecutive hazards
Abstract. Space and time play a crucial role in multi-hazard impact assessment. When two or more natural hazards occur at the same location simultaneously or within a short time frame, the physical integrity of assets and infrastructures can be compromised and the resulting damage can be greater than the one generated by individual hazards occurring in isolation. The current literature highlights the lack of quantitative standardised frameworks for multi-hazard impact assessment. This research presents a generalised mathematical framework for quantitatively assessing multi-hazard physical damage on exposed assets, such as buildings or critical infrastructures, over time. The proposed framework covers both concurrent and consecutive hazards, by modelling: (i) the increased damage resulting from the combined impact of two or more concurrent hazards that overlap in space and time, and (ii) the effects of cumulative damage on asset vulnerability and the recovery dynamics in case of consecutive hazards that overlap in space. The framework is applied to a real-world case study in Puerto Rico, including the concurrent wind and flood impacts generated by the passage of Hurricane Maria, as well as the consecutive impacts caused by the subsequent seismic sequence of 2019–2020. Based on simulations performed on a building portfolio, we found that neglecting residual damages caused by the hurricane when assessing the impacts of the subsequent earthquake would lead to a significant underestimation of the overall damage experienced by the assets. By providing a generalised formalisation to perform quantitative multi-hazard impact assessment, able to account for amplification phenomena and recovery dynamics, the framework can offer scientists and decision-makers a comprehensive and deeper understanding of the impacts caused by compound and consecutive events.
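To make the abstract's description concrete, a minimal sketch of the kind of formulation it describes could look as follows. The notation is assumed purely for illustration and is not taken from the paper: D(t) ∈ [0, 1] is the damage ratio of an asset, R a recovery function, t_i the arrival time of the i-th hazard, IM_{i+1} its intensity measure, and ΔD a state-dependent damage increment.

```latex
\begin{align*}
  % Illustrative notation only (assumed, not the paper's equations):
  % residual damage decays through a recovery function R between hazards,
  % and jumps by a state-dependent increment when hazard i+1 arrives.
  D(t)       &= R\big(D(t_i),\, t - t_i\big), && t_i \le t < t_{i+1}, \\
  D(t_{i+1}) &= \min\Big\{1,\; D(t_{i+1}^{-})
                 + \Delta D\big(IM_{i+1} \mid D(t_{i+1}^{-})\big)\Big\}.
\end{align*}
```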
Status: final response (author comments only)
- RC1: 'Comment on egusphere-2025-2379', Anonymous Referee #1, 18 Aug 2025
- RC2: 'Comment on egusphere-2025-2379', Anonymous Referee #2, 26 Sep 2025
Dear Authors,
Thank you very much for submitting your article. Apologies for the delay in my review. I had to read it a couple of times to make a proper judgment.
While I do think the paper is very well written, it lacks clear novelty. I understand this comes across as negative, but I would like to explain below why I believe that the paper, in its current form, is not novel enough to be published in NHESS.
First of all, I think that the term ‘mathematical framework’ feels like a bit of a stretch. The equations are fairly simple and are merely a mathematical formulation of the recovery curves presented by De Ruiter et al. (2019) in the paper “Why we can no longer ignore consecutive disasters”. As such (and as also mentioned in the abstract), I think the formulas that are presented constitute a generalized formulation rather than a mathematical framework.
And I think that also links to the lack of novelty of this study. While nicely written up, what is actually new in this formulation? Is the novelty that not many papers have written this up in this way? Perhaps. But then I would still expect much clearer examples, with a clear time dimension included, that show how this really works. Or is the key element of this work the code behind this study? But would it perhaps not have been better to submit this to a journal like the Journal of Open Source Software (JOSS), to emphasise the open-source availability of the modelling framework, instead of pushing this into “a new mathematical framework”? I do also like the random event generator in the GitHub repo. It can indeed be nice to play around with stress-testing assets in a specific area in a multi-hazard context. But I don’t think this is really described in the paper?
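To illustrate the stress-testing idea I have in mind, a conceptual sketch of such a generator could look like the snippet below. The hazard types, rates, and function names are hypothetical; this is not the code from the repository.

```python
import random

# Hypothetical mean annual occurrence rates per hazard type.
HAZARDS = {"wind": 0.8, "flood": 0.5, "earthquake": 0.2}

def generate_events(years, seed=42):
    """Draw a random multi-hazard event sequence (time, type, intensity)."""
    rng = random.Random(seed)
    events = []
    for hazard, rate in HAZARDS.items():
        t = rng.expovariate(rate)          # Poisson inter-arrival times
        while t < years:
            events.append((t, hazard, rng.random()))  # intensity in [0, 1]
            t += rng.expovariate(rate)
    return sorted(events)  # chronological sequence to stress-test assets

print(generate_events(10)[:5])
```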
Now, from the Puerto Rico case study example, it is really not clear how the recovery response is actually modelled. The discussion also highlights that many unknowns remain with respect to the recovery dynamics. But how are those recovery dynamics actually incorporated in the modelling as presented here? I see them in the mathematical formulations, but not in the application. There does not seem to be a time dimension included, just an implementation of the fragility curves in a multi-hazard setting?
The code that is presented behind this paper looks nice and is cleanly written up. There I can see (I think, in run_framework.py) that the recovery curve indeed determines the starting state of the assets when the new hazard hits. However, this is not really clear from the Puerto Rico example.
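To spell out the concept I am referring to, a minimal sketch with hypothetical names (not the authors' actual implementation) would be:

```python
def residual_damage(d0, t_since_hazard, recovery_time):
    """Hypothetical linear recovery curve: the damage ratio d0 decays to zero
    over `recovery_time` days; any other functional form could be substituted."""
    frac_recovered = min(t_since_hazard / recovery_time, 1.0)
    return d0 * (1.0 - frac_recovered)

# Starting state of an asset when the next hazard arrives:
d_prior = residual_damage(d0=0.6, t_since_hazard=400, recovery_time=1000)
# d_prior (here 0.36) would then select or shift the state-dependent
# fragility curve used for the incoming hazard.
print(d_prior)
```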
With reference to the fragility curves: because there is no clear implementation of this recovery aspect in the application, there does not seem to be much novelty in the fragility curves either. It is almost a direct implementation from HAZUS?
And to continue on this crucial point of the time component: here, non-physical asset damages really become a key element. Dynamic modelling of the recovery process (which is a key element of the mathematical framework as presented here) goes well beyond physical asset damages, and especially beyond what the example now shows with the “simple” multi-hazard case or the modification of the fragility curves with different state dependencies.
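For clarity, what I mean by state dependency in the fragility curves could be sketched as follows. The parameters and the way the median is shifted are assumed for illustration only; this is not the HAZUS curves or the authors' code.

```python
from scipy.stats import lognorm

def fragility_exceedance(intensity, median, beta, prior_damage=0.0, shift=0.5):
    """Illustrative state-dependent lognormal fragility curve: the median
    capacity is reduced in proportion to the residual damage ratio of the
    asset (prior_damage in [0, 1]); `shift` is a hypothetical sensitivity."""
    median_eff = median * (1.0 - shift * prior_damage)
    # P(DS >= ds | IM = intensity) for a lognormal fragility function
    return lognorm.cdf(intensity, s=beta, scale=median_eff)

# Undamaged vs. pre-damaged asset hit by the same ground-shaking intensity:
p_intact = fragility_exceedance(0.4, median=0.5, beta=0.6)
p_damaged = fragility_exceedance(0.4, median=0.5, beta=0.6, prior_damage=0.36)
print(p_intact, p_damaged)
```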
I think many of my points are also summed up in Sections 4.1 and 4.2. For example, I would expect that a paper with this title would show mathematical formulations that move away from this simplification. This also links to the key points mentioned in “Better understanding and modelling recovery dynamics” and “Evaluation dynamic exposure over time”. I understand that the equations as presented in this paper could perhaps provide a starting point for all of this, but I feel that some of these elements should already be included to warrant publication in a leading journal in the field, such as NHESS.
So to conclude: I think the paper is published with the idea of publishing the Python code. I do not think that the theory presented in this paper (and the case study results) is very exciting by itself, nor necessarily very novel. However, the code that is presented contains a number of nice elements that I have not really seen in, for example, the DamageScanner or Delft-FIAT. There might be elements of this code in CLIMADA, but with that one it is sometimes hard to understand what is all in there. As such, I propose to instead focus on writing an article, for a suitable journal, that mostly focuses on the publication of the code.
Citation: https://doi.org/10.5194/egusphere-2025-2379-RC2
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 762 | 125 | 17 | 904 | 17 | 28 |