This work is distributed under the Creative Commons Attribution 4.0 License.
Impacts from cascading multi-hazards using hypergraphs: a case study from the 2015 Gorkha earthquake in Nepal
Abstract. This study introduces a new approach to multi-hazard risk assessment, leveraging hypergraph theory to model the interconnected risks posed by cascading natural hazards. Traditional single-hazard risk models fail to account for the complex interrelationships and compounding effects of multiple simultaneous or sequential hazards. By conceptualising risks within a hypergraph framework, our model overcomes these limitations, enabling efficient simulation of multi-hazard interactions and their impacts on infrastructure. We apply this model to the 2015 Mw 7.8 Gorkha earthquake in Nepal as a case study, demonstrating its ability to simulate the primary and secondary effects of the earthquake on buildings and roads across the whole earthquake-affected area. The model predicts the overall pattern of earthquake-induced building damage and landslide impacts, albeit with a tendency towards over-prediction. Our findings underscore the potential of the hypergraph approach for multi-hazard risk assessment, offering advances in rapid computation and scenario exploration for cascading geo-hazards. This approach could provide valuable insights for disaster risk reduction and humanitarian contingency planning, where anticipation of large-scale trends is often more important than prediction of detailed impacts.
Status: closed
RC1: 'Comment on egusphere-2024-1374', Anonymous Referee #1, 09 Jul 2024
Dear Authors,
I was invited to review manuscript egusphere-2024-1374, “Impacts from cascading multi-hazards using hypergraphs: a case study from the 2015 Gorkha earthquake in Nepal”.
The use of graphs is certainly very interesting for exploring interactions between multiple hazards. The challenge of applying such a method at a large, national scale is overcome by the proposed hypergraph approach; the main benefit is efficiency. The work is clear and well presented in its overall logic. The research question, case study, and methodology provide the information necessary to appreciate the approach in its generality. However, some passages are unclear and require minor additional information, as detailed below.
I would like to highlight a general aspect that I believe needs further clarification: are the limitations of this approach attributable to the use of hypergraphs or to the individual models used for specific hazards (e.g., fragility functions, estimation of susceptibility maps)? In my understanding, the limitations are due to the latter. If so, I think it is important to clarify this in the discussion and to propose alternatives for future implementations that could address these limitations. Additionally, what are the advantages of using hypergraphs beyond the computational efficiency that makes them applicable at large scales?
Graph methodologies in risk assessment allow, among other things, analysis of graph topology to highlight potential systemic behavior and impact-propagation mechanisms (see, for example, ref. 1). Is this possible with hypergraphs? I invite the authors to consider discussing these potential applications or limitations. My question on reading the manuscript is whether hypergraphs are an innovative algorithm extending traditional multi-hazard risk methodologies (beyond multi-layer single-hazard approaches) to larger scales thanks to their efficiency, or whether they introduce a conceptually different approach to impact estimation. Please clarify this aspect in the discussion section.
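[Editor's note: for readers unfamiliar with the structure under discussion, a hypergraph generalises a graph by letting a single edge connect any number of nodes. A minimal sketch (with hypothetical hazard/asset labels, not the paper's actual data model) shows that simple topological measures such as node degree remain computable, which is relevant to the reviewer's question:]

```python
from collections import Counter

# Minimal hypergraph sketch: each hyperedge maps to an arbitrary node set.
# Labels are illustrative only, not taken from the manuscript.
hyperedges = {
    "eq_shaking": {"building_1", "building_2", "road_1"},
    "landslide_A": {"building_2", "road_1"},
    "landslide_B": {"road_2"},
}

# Node degree = number of hyperedges a node participates in,
# a basic topological measure analogous to graph degree.
degree = Counter(n for nodes in hyperedges.values() for n in nodes)

print(degree["building_2"])  # 2: hit by both shaking and landslide_A
print(degree["road_2"])      # 1: appears in a single hyperedge
```

More elaborate systemic measures (e.g. hyperedge overlap, incidence-matrix spectra) build on the same representation.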
Detailed aspects include:
- Lines 284-292 are unclear; please reformulate them with more explanation for clarity.
- The susceptibility section is too brief, particularly the process of identifying "slope units" and the relationship between "slope units" and the extent of a landslide. Additionally, it is unclear whether the buildings and roads affected by landslides are only those falling within the "slope units" or whether there is some estimation of the landslide's influence area. In either case, further explanation is needed.
- A table summarizing the data used, specifying their main characteristics including the different resolutions, would aid reading.
- Figures 5 and 7 use a continuous scale for discrete colors, which is not intuitive. I suggest exploring other legend options.
- The quality of the figures is low, which may be a pre-print issue. I suggest checking this before the final version.
***
ref 1: Arosio, M., Martina, M. L. V., and Figueiredo, R.: The whole is greater than the sum of its parts: A holistic graph-based assessment approach for natural hazard risk of complex systems, Natural Hazards and Earth System Sciences, 20(2), 521–547, 2020.
Citation: https://doi.org/10.5194/egusphere-2024-1374-RC1 -
AC1: 'Reply on RC1', Alexandre Dunant, 29 Aug 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-1374/egusphere-2024-1374-AC1-supplement.pdf
RC2: 'Comment on egusphere-2024-1374', Cees van Westen, 17 Jul 2024
Dear Authors,
I was invited to review your very interesting paper. The paper presents a hyperedge model approach for multi-hazard risk assessment, which is innovative and worth publishing. The main advantages of the method are the increased speed of calculation and the possibility of generating many ensembles incorporating the uncertainty of the risk components.
Concerning the computational advantages of this model, I would have liked to see more information on how the calculations with hypergraphs are performed, in terms of software, platforms, and calculation speed.
The model is demonstrated at a national scale in Nepal, with the aim of evaluating the damage to individual buildings and road segments. Yet the input data are gridded, the analysis is done on slope units, and neither the specific exposure of individual buildings nor the runout of landslides can be assessed. It is not clear to me why you wanted to go to the building level for a national-scale analysis.
While the hyperedge framework has large potential for multi-hazard risk assessment, the multi-hazard modeling component in this paper is still rather modest. The only interaction considered is earthquake-induced landslides. Other possible follow-up cascading events, such as landslide dams and their breaching, or debris flows, are not considered. The potential applicability of the proposed method to such more advanced interactions is mentioned in the discussion section but not further worked out. This would be a nice topic for a follow-up publication.
The paper demonstrates the applicability of the method by simulating the earthquake damage to buildings, and the earthquake-induced landslide damage to buildings and roads, for the 2015 Gorkha event. The model is trained on the 2015 Gorkha event landslide inventory, so it is to some extent expected that the spatial patterns resemble the actual damage patterns.
Figure 1: Figure 1a mostly shows a single hazard (although considering earthquake-induced landslides) and is also of rather poor quality. What if you were to combine earthquakes with flooding? Then the slope-unit approach would not be appropriate. The concept of a hypergraph is not clear from Figure 1b. Why are there three hypergraphs, and not one or two, in this example? What determines the number?
The method doesn’t seem to analyze the direct damage to roads due to ground shaking. Why is that not considered?
Figure 2: the multi-hazard interactions when including rainfall are not addressed clearly in this figure. Rainfall-induced flooding will not be suitable to consider at the slope-unit level, and debris flows may also cover several units. The other multi-hazard interactions (e.g. landslide dams) and post-earthquake reactivation of landslides are not considered sufficiently in this figure either. Also note that the exact interaction between landslides and infrastructure is not considered. Perhaps you could draw some hypothetical slope units with roads and buildings, and show which components are in the model and which are not?
As you mention in the discussion section, the model might not work for debris flows that reach valleys, nor for settlements and roads located at the outlets of valleys, as the model only considers slope units.
It would be relevant to explain how the building dataset from METEOR was generated and how this could include the construction types and building values for the whole of Nepal. A description of the uncertainties involved would be helpful.
The fragility curves presented in Figure 3 show that even extreme levels of ground shaking, over 1.5 g, do not result in complete damage for many building types in the low case (e.g. unreinforced fieldstone does not reach more than a 60% probability of complete damage at a PGA of 3 g). Unreinforced masonry with cement mortar seems to be more vulnerable than with mud mortar in the higher probability range. This is counterintuitive and requires further explanation, as does the large deviation of the lower curve.
The use of a static susceptibility model which is trained on the landslide inventory caused by the Gorkha earthquake, without including the earthquake shaking as a causal factor, might be problematic.
The simulation takes quite a few shortcuts and makes assumptions that are understandable given the limited data availability. However, the threshold used for defining the slope units with landslides, based on the relation between PGA and landslide occurrence during the Gorkha earthquake, does not take into account the terrain conditions.
276-278: Can you explain how you can sum up the probabilities of the individual buildings per slope unit / administrative unit to obtain the number of destroyed buildings? E.g. if you have 100 buildings, each with a 50% probability of failure, do you then have 50 destroyed buildings?
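[Editor's note: the arithmetic behind the reviewer's example can be made explicit. Summing per-building failure probabilities gives the expected number of destroyed buildings (100 × 0.5 = 50, by linearity of expectation), while a Monte Carlo draw yields one realisation from the distribution around that mean. A minimal sketch with hypothetical probabilities:]

```python
import random

random.seed(0)

# Hypothetical per-building failure probabilities for one admin unit.
p_fail = [0.5] * 100

# Expected number of destroyed buildings: linearity of expectation
# holds even if the individual failures are correlated.
expected = sum(p_fail)
print(expected)  # 50.0

# One Monte Carlo realisation: each building fails independently
# with its own probability (a Bernoulli trial per building).
destroyed = sum(random.random() < p for p in p_fail)
print(destroyed)  # an integer near 50, varying between realisations
```

Whether the paper reports the expectation or an ensemble of realisations is exactly what the reviewer asks the authors to clarify.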
280-291: When you assess the relation between PGA and landslides on the basis of the Gorkha earthquake inventory, will the results not then mimic the situation of the Gorkha earthquake? Also, if the relation between PGA and landslides does not take into account other covariates, would this not have the effect that slope units that are not very steep but have a high PGA are still "activated"? The sentence "That probability, in turn, is compared with a uniform random deviate to determine whether each slope unit is activated or not" is not clear and could be explained better.
295 “We first check if a landslide occurred within the slope unit” Why was this done? Is the analysis then not biased toward replicating the landslide inventory?
299 How is this uniform random deviate (B) determined?
303-305: if landslides are potentially triggered within a slope unit, how do you then determine how many buildings and roads would be impacted? Here again you use a random value. The use of these random values in the method is not clear.
316-318: you create 10000 scenarios but these are all related to the Gorkha earthquake?
387-389: What are the reasons for the over- and under-prediction? You discuss these in the discussion section, but have you tried to reduce the over-prediction by adjusting certain components?
Figure 5: the red contours are poorly visible. You might want to show them in a separate figure, and not repeat them in each map.
Figure 6 is quite complex and does not focus on the main topic of the paper: it replicates single-hazard earthquake building losses for a scenario earthquake. There are quite a few outliers in this graph with large differences between observed and predicted values. Isn’t it logical that for most of the districts the results fall in between the low and high values?
Figure 7: The density of information in these maps is very high, which makes them difficult to interpret. The actual landslide impacts are often overprinted by the other information, especially in panels A and D. The PGA contours could be left out and the color scale adjusted: most values are blue, and it is not possible to see whether red areas are due to the crosses or to the map values.
459-462: could the weaker relation for earthquake-induced landslides not be due to the separation between the triggering PGA values used to activate the slope units and the separate susceptibility values, which did not consider PGA?
Figures 8 and 9: even though building damage is over-predicted by a factor of 50-100 and road damage by a factor of 20-25, the AUC values seem to be very good. Is this not caused by the many administrative units that had no damage at all? Or is the model simply predicting damage/no damage per administrative unit? In that case, the figure is not so meaningful.
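[Editor's note: the reviewer's concern can be illustrated with a toy calculation using invented numbers. AUC measures ranking only: if the few damaged units are ranked above the many undamaged ones, the AUC is high regardless of how badly absolute counts are over-predicted.]

```python
# Hypothetical predicted damage counts for 50 administrative units:
# 3 units with observed damage, 47 without, and heavy over-prediction
# everywhere (illustrative values, not taken from the paper).
predicted = [500, 300, 200] + [10] * 47
observed = [1] * 3 + [0] * 47  # binary: damage / no damage

# AUC = probability that a randomly chosen damaged unit is ranked
# above a randomly chosen undamaged one (ties count half).
pos = [p for p, o in zip(predicted, observed) if o == 1]
neg = [p for p, o in zip(predicted, observed) if o == 0]
wins = sum((pp > pn) + 0.5 * (pp == pn) for pp in pos for pn in neg)
auc = wins / (len(pos) * len(neg))

print(auc)  # 1.0: perfect ranking despite large over-prediction of counts
```

This is why a high AUC over many zero-damage units says little about the accuracy of predicted damage magnitudes, which is the reviewer's point.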
In the discussion it is mentioned that the modeling was only done for the 2015 Gorkha earthquake of 25 April 2015, excluding the event of 12 May 2015. How did you separate the landslides caused by these two earthquakes? My understanding is that, because they occurred close together in time, the mapping of the co-seismic landslides could not differentiate well between the landslides caused by the two events.
For the modeling of earthquake-induced landslides, you might consider applying spatio-temporal data-driven modeling (e.g. Dahal, A., Tanyas, H., van Westen, C., van der Meijde, M., Mai, P. M., Huser, R., and Lombardo, L.: Space-time landslide hazard modeling via ensemble neural networks, Natural Hazards and Earth System Sciences, 24(3), 823–845, 2024). It might be good to address this a bit more in the discussion section.
To what extent do the interpolated PGA values correctly represent the effect of topography, and would PGA alone be the best predictor of landslide occurrence? (See also: Dahal, A., Tanyaş, H., and Lombardo, L.: Full seismic waveform analysis combined with transformer neural networks improves coseismic landslide prediction, Communications Earth & Environment, 5(1), 75, 2024.)
The reference to Fan et al. 2019 is missing in the reference list.
Citation: https://doi.org/10.5194/egusphere-2024-1374-RC2 -
AC2: 'Reply on RC2', Alexandre Dunant, 29 Aug 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-1374/egusphere-2024-1374-AC2-supplement.pdf