the Creative Commons Attribution 4.0 License.
The concept of event-size dependent exhaustion and its application to paraglacial rockslides
Abstract. Rockslides are a major hazard in mountainous regions. In formerly glaciated regions, the disposition mainly arises from oversteepened topography and decreases through time. However, little is known about this decrease and thus about the present-day hazard of huge, potentially catastrophic rockslides. This paper presents a new theoretical concept that combines the decrease in disposition with the power-law distribution of rockslide volumes found in several studies. The concept starts from a given initial set of potential events, which are randomly triggered through time at a probability that depends on event size. The developed theoretical framework is applied to paraglacial rockslides in the European Alps, where available data allow for constraining the parameters reasonably well. The results suggest that the probability of triggering increases roughly with the cube root of the volume. For small rockslides up to 1000 m³, an exponential decrease in frequency with an e-folding time longer than 65,000 yr is predicted. In turn, the predicted e-folding time is shorter than 2000 yr for volumes of 10 km³, so that the occurrence of such huge rockslides is unlikely at present times. For the largest rockslide possible at present times, a median volume of 0.5 to 1 km³ is predicted. With a volume of 0.27 km³, the artificially triggered rockslide that hit the Vaiont reservoir in 1963 is thus not extraordinarily large. Concerning its frequency of occurrence, however, it can be considered a 700 to 1200-year event.
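The triggering scheme described in the abstract — an initial inventory of potential events, each randomly triggered through time at a rate that grows with event size — can be illustrated with a minimal Monte Carlo sketch. All parameter values below (rate constant, power-law exponent of the volume distribution, volume thresholds) are assumptions for illustration only, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Initial inventory of potential rockslide volumes drawn from a
# power-law (Pareto) tail with an assumed exponent of 1.2 and a
# lower cutoff of 1000 m^3 (inverse-CDF sampling).
n_events = 100_000
volumes = 1e3 * (1.0 - rng.random(n_events)) ** (-1.0 / 1.2)  # m^3

# Size-dependent triggering: rate lam(V) = k * V**gamma, with the
# cube-root dependence suggested in the abstract (gamma = 1/3) and
# an arbitrary rate constant k. The waiting time until an event is
# triggered is then exponential with e-folding time 1 / lam(V).
gamma = 1.0 / 3.0
k = 1e-6  # 1/yr, assumed for illustration
rate = k * volumes**gamma
t_trigger = rng.exponential(1.0 / rate)

# Larger events are exhausted sooner on average: compare mean
# triggering times of small (< 1e4 m^3) and large (> 1e6 m^3) events.
small = t_trigger[volumes < 1e4].mean()
large = t_trigger[volumes > 1e6].mean()
print(small > large)  # prints: True
```

Under these assumptions the mean waiting time of the large events is roughly an order of magnitude shorter than that of the small ones, reproducing the qualitative behavior in the abstract: small rockslides persist for tens of millennia, while the largest are depleted early in the paraglacial period.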
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2022-1272', Maria Teresa Brunetti, 24 Feb 2023
Overall, the paper is well written, but I would recommend providing more details to facilitate understanding of the mathematical steps that, although formally correct, are not so obvious, at least to a reader of NHESS. Steps need to be clearly explained even at the cost of some verbosity.
In addition, given the diverse expertise of potential readers, a more extensive explanation of the physical meanings of the variables used in individual equations would be appreciated.
Regarding the theoretical framework, an event-size-dependent depletion in analogy with the case of wildfires (Drossel-Schwabl forest-fire model) is innovative and interesting, but would require a stronger phenomenological/physical hypothesis. It is worth explaining why larger landslides should behave like burned areas, the extent of which depends on the interaction of individual adjacent spatial units.
Minor revisions are in the attached file.
-
AC1: 'Reply on RC1', Stefan Hergarten, 16 May 2023
Dear Maria Teresa Brunetti,
thank you very much for your constructive comments! Before addressing all points in a revised version, let me briefly comment on the most important points.
Of course, I can add more explanations to the mathematical framework. The potential problem is that I have to use your comments and those of the second reviewer as a guideline. You will probably be satisfied with the explanation, but in the worst case, it is still very challenging for the majority of the NHESS readers. Anyway, I will try to make the mathematics more accessible.
The second point is about the transfer from the simple Drossel-Schwabl forest fire model to landslides. You raise the question of why the idea that the probability of failure of any object (a cluster of trees or a potential rockslide) depends on its size can be transferred from the forest fire model (where it is clear) to rockslides. I would argue that a potential dependence on size is the default assumption, while a probability independent of size is specific and would need to be justified. In this context, I think that a power-law dependence with an adjustable exponent is a rather general approach, since a size-independent probability would also be captured (exponent = 0). Anyway, I can add a second part to the motivation section based on my old rockslide model from 2012. Although this model is not necessarily realistic, it may help to illustrate the idea.
Best regards,
Stefan
Citation: https://doi.org/10.5194/egusphere-2022-1272-AC1
-
RC2: 'Comment on egusphere-2022-1272', Anonymous Referee #2, 08 May 2023
The authors develop a theoretical framework to explain event-size-dependent exhaustion. They propose a mathematical formulation for the decline of big events capturing statistical behavior of self-organized criticality, with a simple cellular automaton forest fire model (DS FFM) as an example. Next, the authors apply this framework to rockslides.
This is an interesting concept and I believe the paper can be of added value to the rockslide/landslide community, but only after considering the following points:
- The mathematical derivations at the heart of this manuscript are, at several points, difficult to follow. I provide some line comments below, but in a more general sense, the text would highly benefit from clear and structured explanations for every assumption made, equation proposed, and parameter value chosen.
- The model is tested/validated using a very limited dataset of observations. This is almost certainly an incomplete dataset restricted to a relatively small area (on a global scale). The implications of the limited dataset used here should be discussed thoroughly, and conclusions about model accuracy and robustness in general should be drawn. Also, a reflection should be made on the validity of this framework and SOC models in general. Are rockfalls following self-organized criticality, and why is that? Is the mechanism that explains rockfall size similar to the mechanisms that explain forest growth and decay? Some reflection on spatial correlation of rock properties is needed here. Is there potential to scale this exercise up to larger (global) scales?
- The mathematical framework is proposed as a useful test for larger scale models such as a model published by the same author. Explicit comparison of modelled rockfall (Hergarten, 2012) with this framework in a dedicated section would illustrate this concept more clearly.
Line 27: can be excluded for these two rockfalls
Line 54: Do you mean that a system which exhibits SOC develops an equilibrium between slowly but continuously evolving versus rapid discrete event-based processes? Please clarify
Line 77: nearest-neighbor connections: does this imply only cardinal cells on a square lattice?
Line 96: The overall behavior of the DS FFM simulations is compelling, but I am wondering (i) why the excess occurs at 1e5 and (ii) why the rapid decline and deviation from the power law scaling relationship occurs after. I do not agree it is not relevant here. The fact that this figure will be published warrants explanation for this deviation, regardless of whether it is relevant for the subsequent rockfall analysis. Has this to do with the dimensions of the grid or boundary conditions?
Line 110: describe lambda (add the symbol in the sentence above)
Line 113: here and at several points: it’s always better to guide the reader a little through the math. Makes the manuscript way more accessible. Here e.g. mention integration and boundary conditions to solve eq. 3
Line 118: Not clear how this is a cumulative formulation. In the given formulation, psi declines for increasing values for s. For a cumulative pareto function, I would expect something of the form 1 – (s_max/s)^alpha. Explain this better. Also, for these and following variables mention realistic parameter value ranges (in the context of rockfall)
Line 120 : definition n is not clear
Line 151: “μ (Eq. 1) is the decay constant” This is not clear.
Line 154: Why should that be expected?
Line 155-156: This should be derived and explained much clearer. It is not clear at all to me why this results in respectively 2/3 and 1/3.
Line 162: This is almost certainly an incomplete dataset. How is this influencing the calibration of the model?
Line 178: The derivation of the maximum likelihood is essential to understand Figure 4. I recommend including the appendix in the main manuscript. Also on Figure 4: alpha_v is a function of gamma (Eq. 15). It is unclear how alpha_v and gamma are plotted as (independent?) variables. How is gamma varied for the same value of alpha_v? By changing alpha?
Line 319: why is the likelihood given by the Poisson distribution?
Line 192: not clear how the t0 lines are obtained. More detail is highly needed here.
Line 214: again, does gamma depend on alpha_v (eq 15?)
Line 216: yet another distribution. Explain why this distribution is used. Also explain the symbols…
Line 216 – Line 222. I do not understand what the author does here. Please break this down into clear and understandable pieces, referring to the symbols used in the distribution equation.
Line 234-235: that is, if this model that is calibrated with relatively few datapoints is true.
Line 249: Despite
Line 250: See earlier comments on line 155
Line 254: This is interesting. I recommend giving an explicit example where the author’s earlier modelling work is evaluated using this theoretical framework e.g. for one specific site.
Line 266: True: which is why a clear explanation on how the t0 lines were derived is needed.
Line 275: is this because the topographic configuration does not allow for large rockfalls to materialize?
Line 298: I highly recommend illustrating this with a specific example, see also comment above.
Citation: https://doi.org/10.5194/egusphere-2022-1272-RC2
-
AC2: 'Reply on RC2', Stefan Hergarten, 16 May 2023
Dear Reviewer,
thank you very much for your constructive comments! Before addressing all points in a revised version, let me briefly comment on the most important points.
Of course, I can add more explanations to the mathematical framework. The potential problem is that I have to use your comments and those of the first reviewer as a guideline. You will probably be satisfied with the explanation, but in the worst case, it is still very challenging for the majority of the NHESS readers. Anyway, I will try to make the mathematics more accessible.
As a second point, you mention my old (2012) rockslide model. In an early version of the manuscript, I indeed used this model as a second example for motivating the concept. I removed this part since I was afraid that a reviewer might pick one specific property of the model for tearing down the paper. Since this seems not to be the case, I will be happy to devote a short section to this model and illustrate that it qualitatively follows the theoretical framework proposed here. A quantitative analysis would, however, be too complicated since the rockslide model predicts overlapping events of different sizes.
As a third aspect, you mention the small and probably incomplete dataset for the European Alps. Owing to my lack of expertise on rockslide data from other regions, I would not dare to calibrate the model for other mountain ranges at the moment, and also not at the worldwide scale, although this would be interesting. Concerning the small dataset, however, things are more complex. Rank-ordering statistics (i.e., of the largest or second-largest event still possible at a given time) differ strongly from statistics of individual events. In principle, knowing the largest event provides much more information than a randomly picked event. So the dataset is indeed small, but not as small as it may seem. In order to address the issue of potential incompleteness, I introduced the two alternative scenarios, which take into account that there might still be a huge, undetected rockslide. Perhaps I could point out this aspect a bit more clearly.
Best regards,
Stefan
Citation: https://doi.org/10.5194/egusphere-2022-1272-AC2
Model code and software
Event-size dependent exhaustion and paraglacial rockslides Stefan Hergarten https://doi.org/10.5281/zenodo.7313868
Stefan Hergarten