Probabilistic assessment of postfire debris-flow inundation in response to forecast rainfall
Abstract. Communities downstream from burned steeplands face increases in debris-flow hazards due to fire effects on soil and vegetation. Rapid postfire hazard assessments have traditionally focused on quantifying spatial variations in debris-flow likelihood and volume in response to design rainstorms. However, a methodology that provides estimates of debris-flow inundation downstream from burned areas based on forecast rainfall would provide decision-makers with information that directly addresses the potential for downstream impacts. We introduce a framework that integrates a 24-hour lead-time ensemble precipitation forecast with debris-flow likelihood, volume, and runout models to produce probabilistic maps of debris-flow inundation. We applied this framework to simulate debris-flow inundation associated with the 9 January 2018 debris-flow event in Montecito, California, USA. Sensitivity analyses indicate that reducing uncertainty in postfire debris-flow volume prediction will have the largest impact on reducing inundation outcome uncertainty. The study results are an initial step toward an operational hazard assessment product that includes debris-flow inundation.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-1931', Anonymous Referee #1, 08 Oct 2023
I suggest the manuscript be rejected for the following reasons:
- The paper is an incomplete work. As stated by the authors, “This study is a first step toward the development of an operational framework for probabilistic assessments…”. Most of the main texts, pages 3-7, are describing the methods, only half of a page is demonstrating the results, and one-half of a page is demonstrating the discussion. No innovation result/conclusion may be found in the manuscript. The paper may be rewritten and then resubmitted after the research work has been completed.
- The structures of the abstract and conclusions are also strange. For example, in the part Conclusions, it is concluded in the first sentence: “Probabilistic debris-flow inundation maps may support emergency management and hazard mitigation efforts in advance of forecasted storms over burned areas”. Have you given enough evidences to verify this conclusion? The second sentence of the part, “To explore the feasibility of producing such maps, we integrated output…, is describing the methods, which is not encouraged to emerge in the part Conclusions.
- The title of the paper shown in the webpage, “Online repository for code and data used in: Probabilistic assessment of postfire debris-flow inundation in response to forecast rainfall”, is different to that in the manuscript, “Probabilistic assessment of postfire debris-flow inundation in response to forecast rainfall”.
Citation: https://doi.org/10.5194/egusphere-2023-1931-RC1
AC1: 'Reply on RC1', Alexander Prescott, 21 Dec 2023
Below, we provide our responses to the comments of Reviewer 1 (Anonymous). The reviewer's comments are in quotes and are followed by our response.
"I suggest the manuscript be rejected for the following reasons:
- The paper is an incomplete work. As stated by the authors, “This study is a first step toward the development of an operational framework for probabilistic assessments…”. Most of the main texts, pages 3-7, are describing the methods, only half of a page is demonstrating the results, and one-half of a page is demonstrating the discussion. No innovation result/conclusion may be found in the manuscript. The paper may be rewritten and then resubmitted after the research work has been completed."
Response
We think that the work presented in this manuscript represents a complete study. We welcome this opportunity to clarify our objectives and some of the language used in the initial submission. Our use of the term “operational” is drawn from the context of the National Oceanic and Atmospheric Administration (NOAA) Readiness Levels (https://orta.research.noaa.gov/support/readiness-levels/). The nine NOAA Readiness Levels provide a systematic way to determine the maturity of a research project, from theoretical development (Readiness Level 1) through the various stages of project development (including experimentation, evaluation, user documentation, etc.) up to full deployment and routine use of the project by the intended audience (Readiness Level 9). In this context, the term “operational” is reserved for methods or tools in the final stages of development that have been deployed in a near-real-world environment (Readiness Levels 7–9). As a proof-of-concept and feasibility study, this paper falls in the early phase of method development and would be classified at Readiness Level 3.
This work is an important and necessary step towards a fully operational (Readiness Level 9) rapid postfire debris-flow inundation hazard assessment tool, but there is far more intermediate work to accomplish before such a tool may be regularly used for decision making. Without this modeling framework linking together the various rainfall and debris-flow prediction components, we have no way of understanding the sensitivities of the model output to each of the input parameters. Attempting to go from Readiness Level 1 to a Readiness Level 9 product in one publication is not feasible, not typical, and would be unable to resolve all the open research questions that stand in the way of such a product. Our work is therefore an initial and critical step towards a product applicable in the real world.
To clarify the objectives of our study, we have chosen to remove the term “operational” from the manuscript entirely. In the Abstract, we have replaced the sentence quoted by the reviewer with: “This study represents a first step towards a near-real time hazard assessment product that includes probabilistic estimates of debris-flow inundation and provides guidance for future improvements to this and similar model frameworks by identifying key sources of uncertainty.”
In order to better highlight the results and conclusions of this work, we have rewritten the Conclusions section entirely. We describe this further in the response to the reviewer’s second comment.
- "The structures of the abstract and conclusions are also strange. For example, in the part Conclusions, it is concluded in the first sentence: “Probabilistic debris-flow inundation maps may support emergency management and hazard mitigation efforts in advance of forecasted storms over burned areas”. Have you given enough evidences to verify this conclusion? The second sentence of the part, “To explore the feasibility of producing such maps, we integrated output…, is describing the methods, which is not encouraged to emerge in the part Conclusions."
Response
We have rewritten the Conclusions section to better highlight the results and main takeaways from our study. This includes removal of the lines that the reviewer has quoted. The primary contributions and conclusions of this study are:
- We created a computational framework for probabilistic predictions of rainfall-induced debris-flow inundation downstream of burned basins that integrates an ensemble forecast of rainfall with existing models for postfire debris-flow likelihood, volume, and runout (a minimal sketch of how per-cell inundation probabilities follow from the ensemble appears after this list).
- We applied this framework using a 24-hour, 100-member atmospheric model ensemble forecast of rainfall intensity associated with a destructive debris-flow event that followed the 2017 Thomas Fire. Approximately 99% of the observed inundation area was contained within a region where the simulated probability of inundation was greater than zero.
- Debris-flow volume had the greatest influence on the simulated area inundated, while the flow mobility parameters had a lesser but still significant influence.
- Reducing uncertainty in predictions of postfire debris-flow volume and predictions of rainfall intensities over sub-hourly durations (e.g. 15 minutes), a key input for postfire debris-flow volume models, would lead to the greatest gains in model performance.
- Future work towards a near-real time product aimed at enhancing decision support for postfire debris-flow hazards may build on this work by exploring the role of alternative modeling framework components and the performance of the framework when applied to diverse locations.
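To make the probabilistic mapping step concrete, the sketch below (hypothetical NumPy arrays and variable names, not code from the online repository) computes the probability of inundation in each grid cell as the fraction of ensemble members whose simulated debris flow reaches that cell; the region of nonzero probability is the one referenced in the containment statement above.

```python
import numpy as np

# Hypothetical stack of binary inundation grids, one per ensemble member
# (shape: n_members x n_rows x n_cols). A value of True marks a grid cell
# inundated by that member's simulated debris flow; placeholder data here.
rng = np.random.default_rng(0)
inundation_stack = rng.random((100, 50, 50)) < 0.2

# Per-cell probability of inundation: fraction of ensemble members that
# inundate the cell.
p_inundation = inundation_stack.mean(axis=0)

# Cells with nonzero simulated probability, i.e., inundated by at least one
# ensemble member.
nonzero_region = p_inundation > 0.0
```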
With regard to the need for the modeling framework presented in this study, we point to recent publications that surveyed professionals involved in postfire flooding and debris-flow emergency management. A workshop convened in 2019 brought together over 40 participants from governmental, non-governmental, emergency management, and academic institutions for the purpose of assessing the state of the science in postfire hydrology and guiding future efforts to improve society’s ability to mitigate postfire hazards (Gourley et al., 2020). Participants in this workshop identified the long-term goal of developing methodologies that link spatial estimates of rainfall intensity with surface hydrological models to rapidly map potential debris-flow runout paths in real time. This goal, originally stated by a 2005 NOAA and U.S. Geological Survey (USGS) joint task force, remains unmet today (Gourley et al., 2020; NOAA-USGS Debris Flow Task Force, 2005).
In addition, a recent user needs assessment surveyed postfire emergency management professionals in Southern California on the topic of postfire debris-flow inundation hazard products (Barnhart et al., 2023). Participants expressed a perceived need for a real time product that can map potential postfire debris-flow inundation hazards in response to forecast rainfall, and they expressed that such a product should convey a measure of the forecast uncertainties. We discuss these points in the text on lines 43-45 and line 65. Finally, we note that probabilistic forecasts generated from ensembles can improve decision making quality when the probabilistic information is presented appropriately (Ripberger et al., 2022). We have modified the manuscript to include the conclusions of Gourley et al. (2020) and Ripberger et al. (2022).
Motivated by this comment as well as comments from Reviewer 2, we moved several figures from the supplement to the main text. One of these figures (Fig. S5 in the original submission, now Table 1) displays the results of the global PAWN sensitivity analysis, which makes it easier to describe how flow volume and flow mobility parameters influence modeled area inundated. Results of the global and spatially distributed sensitivity analyses provide guidance for future work by isolating components of the framework (i.e., models for debris-flow volume) where improvements are likely to lead to the greatest gains in overall model performance. We include additional discussion in the revised manuscript that focuses on the implications of our sensitivity analysis and provides guidance for improving this and similar computational frameworks for postfire debris-flow hazards.
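For readers less familiar with the PAWN method, the sketch below (a generic given-data approximation with hypothetical variable names, not the implementation used in this study) estimates density-based sensitivity indices as Kolmogorov–Smirnov distances between the unconditional distribution of the model output and its distribution conditioned on bins of each input:

```python
import numpy as np
from scipy.stats import ks_2samp

def pawn_indices(X, y, n_bins=10, summary=np.median):
    """Generic given-data approximation of PAWN sensitivity indices.

    X : (N, M) array of sampled inputs (e.g., debris-flow volume and mobility parameters)
    y : (N,) array of model output (e.g., simulated area inundated)
    For each input, the output sample is conditioned on equal-frequency bins of
    that input, and the Kolmogorov-Smirnov distance between each conditional
    distribution and the unconditional one is summarized with `summary`.
    """
    N, M = X.shape
    indices = np.empty(M)
    for i in range(M):
        edges = np.quantile(X[:, i], np.linspace(0.0, 1.0, n_bins + 1))
        ks_vals = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (X[:, i] >= lo) & (X[:, i] <= hi)
            if mask.sum() > 1:
                ks_vals.append(ks_2samp(y[mask], y).statistic)
        indices[i] = summary(ks_vals)
    return indices
```

An index near zero means that conditioning on the input barely changes the output distribution (low influence), while larger values indicate stronger influence, which is how flow volume and mobility parameters can be ranked.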
- "The title of the paper shown in the webpage, “Online repository for code and data used in: Probabilistic assessment of postfire debris-flow inundation in response to forecast rainfall”, is different to that in the manuscript, “Probabilistic assessment of postfire debris-flow inundation in response to forecast rainfall”."
Response
As far as we are able to tell, the correct titles appear in the expected places in both our submission materials and on the EGUsphere online portal. The title of the online repository that hosts all of the modeling code and data used in our submission, “Online repository for code and data used in: Probabilistic assessment of postfire debris-flow inundation in response to forecast rainfall,” appears correctly under the Assets for review: Model code and software section in our display of the EGUsphere research article record. We followed the NHESS submission directions when preparing this repository and when citing it within our submission. The title of the repository is necessarily different from the title of the manuscript, “Probabilistic assessment of postfire debris-flow inundation in response to forecast rainfall,” so as to prevent confusion.
References
Barnhart, K. R., Romero, V. Y., and Clifford, K. R.: User needs assessment for postfire debris-flow inundation hazard products, U.S. Geological Survey Open-File Rep. 2023–1025, 25 pp., https://doi.org/10.3133/ofr20231025, 2023.
Gourley, J. J., Vergara, H., Arthur, A., Clark III, R. A., Staley, D., Fulton, J., Hempel, L., Goodrich, D. C., Rowden, K., and Robichaud, P. R.: Predicting the Floods that Follow the Flames, B. Am. Meteorol. Soc., 101(7), E1101–E1106, https://doi.org/10.1175/BAMS-D-20-0040.1, 2020.
NOAA–USGS Debris Flow Task Force: NOAA-USGS debris-flow warning system—Final report, U.S. Geological Survey Circular 1283, 47 pp., https://doi.org/10.3133/cir1283, 2005.
Ripberger, J., Bell, A., Fox, A., Forney, A., Livingston, W., Gaddie, C., Silva, C., and Jenkins-Smith, H.: Communicating Probability Information in Weather Forecasts: Findings and Recommendations from a Living Systematic Review of the Research Literature, Weather Clim. Soc., 14(2), 481–498, https://doi.org/10.1175/WCAS-D-21-0034.1, 2022.
Citation: https://doi.org/10.5194/egusphere-2023-1931-AC1
RC2: 'Comment on egusphere-2023-1931', Paul Santi, 14 Oct 2023
Well written paper based on a solid modeling database.
Figure 3. Should the red circle be “D-f deposit initiation point” since it is where the deposit starts and not the beginning of the actual debris flow
Figure 3 - any statistical quantification of the quality of the prediction?
Figure 5 - axis labels hard to read at that font size
Line 233ff - I think it might help to show the p=16% map. When I look at Figure 3, it appears in places that true runout matches best with p>40-60 (Buena Vista) and other places it matches better with p<20 (Montecito). In my opinion, Figure 3 is not very convincing of the accuracy of the forecast system.
I found it a bit difficult to follow the graphical story at times because there was so much reliance and reference to supplemental figures. I realize that the publisher may limit the number of figures included with the paper, but it would help to look for ways that the reader could still understand the ideas and be convinced of the reliability of the model without needing to refer to supplemental figures.
Citation: https://doi.org/10.5194/egusphere-2023-1931-RC2
AC2: 'Reply on RC2', Alexander Prescott, 21 Dec 2023
Below, we provide our responses to the comments of Reviewer 2 (Paul Santi). The reviewer's comments are in quotes and are followed by our response.
"Well written paper based on a solid modeling database.
Figure 3. Should the red circle be “D-f deposit initiation point” since it is where the deposit starts and not the beginning of the actual debris flow"
Response
The reviewer is correct, and we have edited Fig. 3 (and all other relevant figures) to refer to these as “ProDF starting points,” consistent with Gorr et al. (2022).
"Figure 3 - any statistical quantification of the quality of the prediction?"
Response
We think that the best way to evaluate the quality of the probabilistic predictions is through the reliability diagrams because they represent the full joint distribution of the observations and predictions (Wilks, 2019). To aid the reader in assessing the prediction quality, we have moved the reliability diagrams associated with the WRF ensemble forecast of debris-flow inundation into a figure panel with the map of forecast probabilities (formerly Fig. 4a-b, now Fig. 3b-c).
If the reviewer desires a single metric quantifying the prediction quality, then the Brier score may be used (Wilks, 2019). The Brier score is essentially the normalized sum of squares of the difference in every grid cell between the observed inundation (i.e., 0 or 1) and the forecast probability (i.e., a real number between 0 and 1). More accurate forecasts produce lower Brier scores. The Brier score of the WRF ensemble forecast is 0.055, that of Scenario A is 0.047, and that of Scenario B is 0.052. However, we decided not to introduce the Brier score into the revised manuscript since we think that the quality of the prediction is best assessed using the reliability diagram, and we hesitate to introduce an additional model performance statistic into an already crowded Methods section.
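For concreteness, a minimal sketch of the grid-based Brier score described above (hypothetical inputs, not code from the repository) is:

```python
import numpy as np

def brier_score(p_forecast, observed):
    """Mean squared difference between the forecast probability of inundation
    (a value in [0, 1]) and the observed binary inundation (0 or 1) per grid cell."""
    p = np.asarray(p_forecast, dtype=float)
    o = np.asarray(observed, dtype=float)
    return np.mean((p - o) ** 2)

# A perfect forecast scores 0 and a maximally wrong one scores 1.
print(brier_score([0.1, 0.9, 0.0], [0, 1, 0]))  # small value for a skillful forecast
```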
"Figure 5 - axis labels hard to read at that font size"
Response
We have increased the font size where needed in all figures to improve readability.
"Line 233ff - I think it might help to show the p=16% map. When I look at Figure 3, it appears in places that true runout matches best with p>40-60 (Buena Vista) and other places it matches better with p<20 (Montecito). In my opinion, Figure 3 is not very convincing of the accuracy of the forecast system."
Response
We have moved the panel figure of binary inundation maps from the supplement into the main text (formerly Fig. S3, now Fig. 4), and we have added additional text describing this panel figure to the Results section.
We emphasize that the method we present does not rely on optimizing any of the model components in an attempt to fit the simulations to the observations. For example, we use a 24-hour 100-member ensemble to determine the 15-minute rainfall intensities that are used for determining debris-flow likelihood and volume. Accordingly, we think that the appropriate way to assess model performance using Fig. 3 is to compare the observed inundation with the simulated area inundated. Fig. 3 shows that 94% of the observed area inundated was contained within a region where the probability of inundation exceeded 1% and that value increases to 99% when considering the region of all probabilities greater than zero. We conclude that the observation was contained within the range of inundation scenarios represented by the ensemble forecast. In general, we find that the model under-forecasts inundation extent. We explore the underlying cause of this by comparing forecast results with two alternative scenarios, which we refer to as Scenarios A and B, in which debris-flow volumes are defined based on the observed volume and the observed rainfall intensity (rather than the rainfall intensities from the atmospheric model ensemble). Based on these numerical experiments, we determined that we could attribute the under-forecast of inundation extent primarily to the under-forecasting of peak 15-minute rainfall intensities in the atmospheric model ensemble. The reliability diagram for Scenario A demonstrates that when the debris-flow volume is well known, the forecast is well-calibrated (in that points on the calibration curve lie close to the one-to-one line) and refined (a.k.a. sharp, in that probabilities close to zero and one are predicted most often).
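As an illustration of how a reliability diagram can be constructed (a sketch with hypothetical variable names, not the code used to produce the figures), forecast probabilities are binned and compared with the observed relative frequency of inundation in each bin; points near the one-to-one line indicate good calibration, and bin counts concentrated near 0 and 1 indicate a refined (sharp) forecast:

```python
import numpy as np

def reliability_curve(p_forecast, observed, n_bins=10):
    """Points for a reliability diagram: mean forecast probability versus observed
    relative frequency of inundation within each forecast-probability bin."""
    p = np.ravel(np.asarray(p_forecast, dtype=float))
    o = np.ravel(np.asarray(observed, dtype=float))
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    mean_p, obs_freq, counts = [], [], []
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            mean_p.append(p[mask].mean())    # forecast probability in this bin
            obs_freq.append(o[mask].mean())  # observed frequency of inundation
            counts.append(int(mask.sum()))   # sample size (sharpness information)
    return np.array(mean_p), np.array(obs_freq), np.array(counts)
```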
"I found it a bit difficult to follow the graphical story at times because there was so much reliance and reference to supplemental figures. I realize that the publisher may limit the number of figures included with the paper, but it would help to look for ways that the reader could still understand the ideas and be convinced of the reliability of the model without needing to refer to supplemental figures."
Response
We have moved multiple figures from the supplement into the main text to address the reviewer’s point and improve the graphical story told by the manuscript. Specifically, we moved Fig. S3 (now Fig. 4), Fig. S5 (now Table 1), and Fig. S7 (now Fig. 5) into the main text. We also incorporated the reliability diagram panels of Fig. 4 into the new Fig. 3 and Fig. 5 for improved connection between each probabilistic map and the corresponding reliability diagrams. We also added latitudinally-binned averages of the sensitivity indices to the maps of spatially-distributed sensitivity indices to clarify patterns as a function of distance along the fan (formerly Fig. 5a-c, now Fig. 6a-c).
References
Gorr, A. N., McGuire, L. A., Youberg, A. M., and Rengers, F. K.: A progressive flow-routing model for rapid assessment of debris-flow inundation, Landslides, 19, 2055–2073, https://doi.org/10.1007/s10346-022-01890-y, 2022.
Wilks, D. S.: Statistical Methods in the Atmospheric Sciences, Fourth Edition, Elsevier, Amsterdam, Netherlands, 818 pp., https://doi.org/10.1016/C2017-0-03921-6, 2019.
Citation: https://doi.org/10.5194/egusphere-2023-1931-AC2
Model code and software
Alexander B. Prescott, Luke A. McGuire, Kwang-Sung Jun, Katherine R. Barnhart, and Nina S. Oakley: Online repository for code and data used in: Probabilistic assessment of postfire debris-flow inundation in response to forecast rainfall, https://doi.org/10.5281/zenodo.7838914