A quantitative module of avalanche hazard—comparing forecaster assessments of storm and persistent slab avalanche problems with information derived from distributed snowpack simulations
Abstract. Avalanche forecasting is a human judgment process with the goal of describing the nature and severity of avalanche hazard based on the concept of distinct avalanche problems. Snowpack simulations can help improve forecast consistency and quality by extending qualitative frameworks of avalanche hazard with quantitative links between weather, snowpack, and hazard characteristics. Building on existing research on modeling avalanche problem information, we present the first spatial modeling framework for extracting the characteristics of storm and persistent slab avalanche problems from distributed snowpack simulations. Grouping of simulated layers based on regional burial dates allows us to track them across space and time and calculate insightful spatial distributions of avalanche problem characteristics.
We applied our approach to ten winter seasons in Glacier National Park, Canada, and compared the numerical predictions to human hazard assessments. Despite good agreement in the seasonal summary statistics, the comparison of the daily assessments of avalanche problems revealed considerable differences between the two data sources. The best agreements were found in the presence and absence of storm slab avalanche problems and the likelihood and expected size assessments of persistent slab avalanche problems. Even though we are unable to conclusively determine whether the human or model data set represents reality more accurately when they disagree, our analysis indicates that the current model predictions can add value to the forecasting process by offering an independent perspective. For example, the numerical predictions can provide a valuable tool for assisting avalanche forecasters in the difficult decision to remove persistent slab avalanche problems. The value of the spatial approach is further highlighted by the observation that avalanche danger ratings were better explained by a combination of various percentiles of simulated instability and failure depth than by simple averages or proportions. Our study contributes to a growing body of research that aims to enhance the operational value of snowpack simulations and provides insight into how snowpack simulations can help address some of the operational challenges of human avalanche hazard assessments.
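The spatial summarization the abstract describes can be illustrated with a minimal sketch, assuming each tracked layer yields one p_unstable value and one burial depth per simulated grid point per day; all names and the input format are illustrative assumptions, not the authors' archived code (see the data and code reference below).

```python
# Minimal sketch of the spatial summarization described above: reduce the
# regional field of modeled instability (p_unstable) and failure depth for
# one tracked layer on one day to percentile summaries. Names and input
# format are assumptions for illustration only.
import numpy as np

def summarize_layer(p_unstable, depth_cm, percentiles=(10, 25, 50, 75, 90)):
    """Percentile summaries of one tracked layer across all grid points."""
    p = np.asarray(p_unstable, dtype=float)
    d = np.asarray(depth_cm, dtype=float)
    return {
        # spatial distribution of instability across the region
        "p_unstable_pct": dict(zip(percentiles, np.percentile(p, percentiles))),
        # failure depth distribution, a proxy for expected avalanche size
        "depth_pct": dict(zip(percentiles, np.percentile(d, percentiles))),
        # fraction of grid points with poor stability; the 0.77 threshold
        # follows Mayer et al. (2022), cited in the reviews below
        "prop_unstable": float(np.mean(p >= 0.77)),
    }
```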
Status: final response (author comments only)
RC1: 'Comment on egusphere-2024-871', Zachary Miller, 24 May 2024
Overview:
The manuscript titled “A quantitative module of avalanche hazard—comparing forecaster assessments of storm and persistent slab avalanche problems with information derived from distributed snowpack simulations” sets out to improve avalanche forecasting quality by developing an additional toolset and analyzing its effectiveness. The authors leverage recent scaling developments in the utilization of the SNOWPACK model to produce spatially distributed snow cover outputs over ten winter seasons for Glacier National Park, Canada. They then post-process this data to produce numerical predictions of the characteristics of storm and persistent slab avalanche problems and compare those results against the time series of daily human hazard assessments. Their comparison is extensive and thorough, evaluating both broad trends and the day-to-day evolution of avalanche problems during specific events. They describe their methods clearly despite the relative complexity required in the post-processing and inter-comparison of their datasets. The results and discussion clearly establish where their work fits within the current sphere of research and the contribution their methods offer to the snow and avalanche science community.
I do not have any major issues with this manuscript and feel that it is of very high quality. The largest question raised is whether or not they considered comparing their two hazard assessments (simulated and human) against observed avalanche records? I realize that the additional effort is probably outside the scope of the current project and that there are known limitations to observational records but also believe that their specific domain - Glacier National Park, Canada - has potentially one of the most complete and thorough records available due to the high level of professional avalanche activity in the terrain in and around the park. These records, and additionally utilizing the Avalanche Hazard Index (Schaerer, 1989), could provide a relative “truth” in the avalanche hazard characteristics being compared. I would appreciate a response to my question but do not feel that this further analysis is required for publication given the robustness of the work presented.
Specific technical corrections and comments:
Line 100 – use of word “over” is confusing within the discussion of the ordinal likelihood of avalanches scale, simply remove to clarify
Line 106 – use of word “over” is confusing within the discussion of the ordinal North American Public Avalanche Danger Scale, simply remove to clarify
Line 119 – “layers that were exposed to the snow surface” is confusing, perhaps adjust simply to “layers that were the exposed snow surface”
Line 120 – “represent rain events that form a crust” implies that rain is the only way surface crusts form and the date tags probably include additional crust formation events. Perhaps something like: “represent crust formation events at the snow surface such as rain or insolation-driven melting.”
Line 180 – It seems as though the likelihood of avalanches should be represented by the layer with the highest p_unstable value, or an average for that profile, rather than the lowest? (A sketch contrasting these aggregation choices follows this list.)
Figure 5g – The color and interquartile range of the air temperature line are the same as those of p_unstable in plots 5c-5f and are, therefore, slightly confusing. Consider changing it to dashed or a different color to differentiate it.
Figure 5h – The color of the median height of new snow is the same as that of storm slab instabilities in plots 5d and 5f and is, therefore, slightly confusing. Consider changing the color to differentiate it.
Figure 6 – It appears but is not explicitly described that assessed values are solid and modelled are hatched. Perhaps mention this in the figure description or add a legend.
Line 299 – “increased strongly” doesn’t make sense in reference to the simulated depth of the weak layer, perhaps remove “strongly” or change wording to “increased substantially”
Line 304 – “short and moderate peaks of modeled instability” is confusing since “moderate” is not defined within the spectrum of modeled instability
Line 331 – “as” is meant to be “was”
Figure 8c & 8d – “Turn-on” and “Turn-off” are confusing category names; I am interpreting them as whether the avalanche problem was assessed (“Yes”/“No”) or added/removed?
Line 359 – “is unstable” should be “are unstable”
Line 361-363 – You mention that deeper splits show a generally higher hazard rating associated with storm snow problems than persistent problems. Is this due to the relative frequency of lower hazard ratings associated with persistent problems (aka “spicy moderate” or existing for weeks) vs. the short-term spiking hazard commonly found with storm snow problems (quick to rise and quick to fall)? If so, I believe it is worth clarifying the relative temporal effects of both types of problems because your results discussion seems to say that the model simply pairs lower hazard with persistent problems.
Line 380-381 – Delete sentence since it seems you are submitting the manuscript for publication in a peer-reviewed journal and I hope you’ve taken those additional research steps.
Line 423-425 – Very concise distillation of the effectiveness of your model – nice job.
Line 444-445 – How do forecasters' removals of reported persistent problems appear arbitrary? That is a big statement to make given the multitude of factors forecasters must balance to make that call.
Line 545 – Understanding the truth of avalanche hazard is infinitely complex. Has your team considered comparing your results with observed avalanche activity to quantify the accuracy of avalanche depths/sizes and, loosely, their distribution (despite the inherent bias in physically observed avalanche records)? I wonder if this could help clarify a relative truth, especially when the simulated and reported hazards differ.
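The Line 180 comment above can be made concrete with a short sketch of the three per-profile aggregation choices it mentions; this is a hypothetical illustration, not code taken from the manuscript.

```python
# Three ways to collapse the layer-wise p_unstable values of one simulated
# profile into a single number, as questioned in the Line 180 comment above.
# Hypothetical illustration, not the manuscript's code.
import numpy as np

def profile_p_unstable(layer_p, how="max"):
    """Aggregate layer-wise instability within one simulated profile."""
    p = np.asarray(layer_p, dtype=float)
    if how == "max":    # most unstable layer drives the likelihood
        return float(p.max())
    if how == "min":    # least unstable layer, as the comment reads L180
        return float(p.min())
    return float(p.mean())  # profile average
```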
Final Thoughts:
I reiterate that with minor updates this paper will be a valuable addition to the avalanche forecasting and snow science communities.
Citation: https://doi.org/10.5194/egusphere-2024-871-RC1
CC1: 'Comment on egusphere-2024-871', Frank Techel, 05 Aug 2024
Dear authors,

I greatly appreciate your comprehensive comparison between model predictions and human assessments. When reading through your manuscript, several points didn't become fully clear or were a little confusing. These all relate to the Methods described in Sections 3.2 and 3.3. Please find some feedback and questions below.

I hope these comments and questions are helpful,

Frank Techel

Specific comments and questions
- While explaining the link between the distribution of p_unstable and the likelihood of avalanches (in CMAH) makes sense (Sect. 3.1), consider using the term p_unstable rather than the likelihood of avalanches when you refer to the model predictor (e.g., L174). This would make it easier to understand when you are referring to model predictions and when to human assessments.
- Along that same line, you use expected p_unstable for the first time on L211. I presume it is meant to be introduced on L191(?), though it is referred to there as the expected likelihood of avalanches. Only later did I notice that Figure 4a shows the expected p_unstable values, but this is nowhere mentioned (or I missed it). From Fig. 4a, I took it that the expected p_unstable is the mean of all the p_unstable values in the plot. On L189 you refer to the likelihood of avalanches (but I presume this is again p_unstable) for which you derive various percentiles. Consider indicating that the 50th percentile is what you call expected p_unstable.
- L164: you say that you used the threshold p_unstable >= 0.77 to define layers with poor stability, as proposed by Mayer et al. (2022). But afterwards, you seem to analyze exclusively p_unstable; at least in all the figures p_unstable is shown. Why did you primarily explore p_unstable and not the proportion p_unstable >= 0.77 (both statistics are written out in the sketch after these comments)? I would expect that this explains why your distribution of 2 (moderate) was wider compared to Switzerland, while the distribution of 3 (considerable) was wider in Switzerland than in your data (L404-406). You also say something along that line. Out of curiosity, while analyzing, did you plot Figures 7a, c, e, g and Figure 8g and h using the proportion unstable rather than the expected p_unstable?
- L178: I assume this is just a typo, it should probably read >= rather than <=?
- L186: I don't understand how the point cloud in Figure 4 can provide a spatial distribution. I am aware that spatial distribution is the term used for the number of triggering locations in CMAH. In this context, I found it rather confusing as the distribution in the plot doesn't have a spatial component. - Consider changing to something like "the number of potential triggering points within a region can be gauged from the distribution of p_unstable". At least to me, p_unstable provides primarily an indication of potential instability considering a Rutschblock test. Of the unstable locations, only a small fraction will be sufficiently unstable to result in human-triggered (or natural) avalanches (Mayer et al., 2023).
- L192: How do you proceed if none of the profiles fulfills the p_unstable >= 0.77 criterion, i.e., how do you derive the expected depth when no such layers are present?
- You mentioned twice that p_unstable correlated more strongly with the danger level than the likelihood terms (e.g., L311-312). This is interesting. - Is this maybe due to likelihood estimates being less reliably estimated by forecasters as compared to the reliability of danger level estimates? Or is this maybe linked to the fact that p_unstable is a mix of Rutschblock stability and danger level (p_unstable actually correlated more with danger level than RB stability, see Mayer et al., 2022 [p.4601])?
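The two regional summary statistics at issue in these comments, and the edge case raised for L192, can be written out compactly. This is a minimal sketch under the assumption that each regional profile contributes one p_unstable value and one failure depth per day; the names are hypothetical, and whether "expected" denotes the mean or the 50th percentile is exactly what the comments ask (the median is shown here).

```python
# Hypothetical sketch of the summary statistics discussed above, assuming
# each regional profile contributes one p_unstable value and one failure
# depth per day. The median is shown for "expected"; the comments above ask
# whether the manuscript means the mean or the 50th percentile.
import numpy as np

def expected_p_unstable(p):
    """Median ("expected") p_unstable across all profiles in the region."""
    return float(np.median(p))

def proportion_unstable(p, threshold=0.77):
    """Fraction of profiles with poor stability (threshold per Mayer et al., 2022)."""
    p = np.asarray(p, dtype=float)
    return float(np.mean(p >= threshold))

def expected_depth(p, depth_cm, threshold=0.77):
    """Median failure depth over unstable profiles; NaN when none exceed
    the threshold (the L192 edge case raised above)."""
    p = np.asarray(p, dtype=float)
    d = np.asarray(depth_cm, dtype=float)
    mask = p >= threshold
    return float(np.median(d[mask])) if mask.any() else float("nan")
```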
Citation: https://doi.org/10.5194/egusphere-2024-871-CC1
RC2: 'Comment on egusphere-2024-871', Veronika Hatvan, 19 Aug 2024
The manuscript titled "A quantitative module of avalanche hazard—comparing forecaster assessments of storm and persistent slab avalanche problems with information derived from distributed snowpack simulations" presents a well-executed study aimed at enhancing avalanche forecasting by integrating numerical snowpack simulations with human judgment. The study introduces a spatial approach to extracting the characteristics of storm and persistent slab avalanche problems from distributed snowpack simulations, tracing individual snowpack layers over time and space. This approach enables the calculation of spatial distributions of avalanche problem characteristics and presents the data in familiar hazard chart formats aligned with the conceptual model of avalanche hazard (CMAH).
The study leverages snow cover data spanning ten winter seasons from Glacier National Park, Canada, to examine the agreement between snow cover simulations and human assessments for persistent and storm slab avalanche problems. The comparison is thorough, addressing both seasonal trends and day-to-day evaluations. The authors clearly describe their methods, which are up-to-date and well-suited to the study’s objectives. This work aligns well with recent advancements aimed at integrating snowpack modelling more closely with operational forecasting workflows.
The applied methods and developed approaches are of high quality and contribute significantly to the goal of further integrating snow cover modelling results into avalanche forecasting workflows. The comparison between modelling results and human assessments provides a valuable foundation and insights for future applications.
Minor Corrections and Comments:
Line 174: I noticed a small typo in the phrase 'characteristics avalanche problem type'; I assume it should be 'characteristic avalanche problem type'? Additionally, I concur with the comment by Frank Techel that, for clarity, it would be beneficial to consistently use the term p_unstable when referring to the model predictor. This distinction will help avoid (my) confusion when differentiating it from human assessments (e.g., Line 174 and other locations).
Line 175 – 178 & 180 - 181: To me, it is unclear how the depth of the identified layer differs from the depth of the deepest unstable layer. To my understanding, these two are the same. Consider revising for more clarity, otherwise I would be interested in a reply to clarify this distinction.
Line 178 & 181: I assume this is a typo, and p_unstable ≤ 0.77 should instead read p_unstable ≥ 0.77.
Line 186: I do not understand the term 'spatial distribution' as it relates to Figure 4, as I don't see any spatial component represented in the figure. Consider revising this term for greater clarity.
Figure 5: For clarity, it would be helpful to choose different colours for air temperature and HN24, as the current colours are very similar to those used for p_unstable and avalanche problems in the upper panels. Additionally, I assume that air temperature and HN24 are based on modelled data rather than measurements? It might be beneficial to add a small comment on this for clarity.
Figure 6: It took multiple views to understand that the hatched bars represent modelled data and the full-colour bars represent assessed data. To improve clarity, consider explicitly stating this in the figure caption for easier understanding.
Line 380: Remove sentence – you are already submitting to a peer-reviewed journal.
Conclusion:
I do not have any major issues with this manuscript; the research is solid, and the conclusions are well-supported by the data. My comments are minor in nature. Overall, I recommend this manuscript for publication after these minor revisions are addressed.
Citation: https://doi.org/10.5194/egusphere-2024-871-RC2
Interactive computing environment
Quantitative module of avalanche hazard—Data and Code. F. Herla, P. Haegeli, S. Horton, and P. Mair. https://doi.org/10.17605/OSF.IO/94826