Improved modelling of mountain snowpacks with spatially distributed precipitation bias correction derived from historical reanalysis
Abstract. Accurate estimates of snow water equivalent (SWE) are essential for effective water management in regions dependent on seasonal snowmelt. However, large biases and high uncertainty in mountain precipitation data products pose significant challenges. This study leverages a SWE reanalysis framework and historical dataset to derive factors that can downscale and bias-correct mountain precipitation in a real-time modelling context. We evaluate through hindcast modelling how different versions of this precipitation bias correction affect errors in 1 April SWE estimates within a representative snow-dominated watershed in the Western U.S. We also evaluate how the additional assimilation of fractional snow-covered area (fSCA) or snow depth observations during the accumulation season impacts the 1 April SWE estimates. Results show that spatially distributed, historically informed precipitation bias correction significantly improves SWE estimates, reducing the normalized root mean square difference (NRMSD) by 58 %, increasing the correlation (R) by 43 %, and decreasing the mean difference (MD) by 88 %. The primary strength of this bias correction method lies in capturing the spatial distribution of precipitation bias rather than its interannual variability. Assimilating snow depth observations further reduces errors both at the watershed scale (NRMSD reduced by 46 %) and at the pixel level in most years, while accumulation season fSCA assimilation is generally not useful. We demonstrate the value of these methods for streamflow forecasting: bias-corrected precipitation improves the correlation between daily simulated snowmelt and observed streamflow by 31–39 % and reduces bias in predicted April–July runoff volumes by 46–52 %. This study highlights how historical SWE reanalysis datasets can be leveraged and applied in a real-time context by informing precipitation bias correction.
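As a reading aid (and not the authors' exact implementation), the core idea can be pictured as multiplying the regridded precipitation forcing by a field of correction factors derived from the historical SWE reanalysis. The minimal sketch below assumes a purely multiplicative correction; all values, grid sizes, and names are made up.

```python
import numpy as np

def apply_bias_correction(precip_coarse, b_field):
    """Multiply regridded precipitation by a spatially distributed
    correction field (illustrative only).

    precip_coarse : 2-D array, precipitation regridded to the model grid (mm)
    b_field       : 2-D array of correction factors on the same grid,
                    e.g. derived from a historical SWE reanalysis
    """
    return b_field * precip_coarse

# Toy example on a 3x3 grid: hypothetical factors that add precipitation
# at high elevations (b > 1) and remove it in the valley (b < 1).
precip = np.full((3, 3), 10.0)          # mm per time step
b = np.array([[1.6, 1.4, 1.2],
              [1.3, 1.0, 0.8],
              [1.1, 0.7, 0.4]])
print(apply_bias_correction(precip, b))
```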
Status: open (until 31 Dec 2024)
RC1: 'Comment on egusphere-2024-3389', Michael Matiu, 09 Dec 2024
Von Kaenel and Margulis tackle the notoriously difficult task of estimating spatially resolved SWE in complex mountain terrain by using a by-product of their SWE reanalysis framework to bias correct precipitation in forward SWE modelling. Their approach is an interesting and elegant solution to this issue. However, a large part of their approach rests on the spatial precipitation bias correction factors used in the SWE modelling, which requires some revisions.
Major points:
- Conceptually, your bias correction performs two steps in one: A) downscaling of the coarse-resolution input to a much higher resolution and B) bias correction. This might explain some unexpected phenomena, such as 1) near-zero b factors in valleys and 2) annually varying b factors (assuming intensity-dependent biases of MERRA). I don’t expect the authors to change their methodology to test these two steps separately, but maybe the authors can find a way to identify these two components. At least, my suggestion is that the authors keep this in mind when discussing results.
- Related: The uniform b fails to capture spatial patterns because your meteo input effectively has only 2 grid cells. I’m not sure how you arrived at smooth annual precipitation like in Fig. 2b, but its spatial patterns are almost the inverse of the posterior precipitation. With a spatially better resolved input (better downscaling?), a uniform b would probably also perform better, no? Or maybe not, if the biases depend on elevation or other factors, such as the land surface model?
- Discussion is missing.
Minor points:
- Abstract: Besides the relative improvements, absolute numbers would also be helpful.
- L35: “Remote sensing…” For mountains in particular, or in general? At least, regarding SWE from remote sensing: mountains no, but in flat terrain, partly yes.
- Introduction is missing the state-of-the-art on precipitation bias correction.
- Table 1: unclear what the differences are between the b subscripts clim, wet, normal, dry.
- Methodology is missing some information, such as the temporal resolution and extent, the spatial resolution, how different spatial resolutions were merged, which regridding methods were used, the basics of the land surface model with respect to snow modelling (layers, processes, ...), …
- If I understood correctly, the reanalysis framework is applied in hindcast with assimilation of fSCA to derive the b distributions (and the target reference), and these b are then applied in forward models under different experiments. Maybe a flowchart could help visualize the overall methodology and show when/how the different datasets are used at each stage or where they are inherited.
- Fig 2d: b values around 0 in the valleys mean that precipitation is effectively removed from those pixels, right? Does this make sense?
- Elevation dependency of biases: I assume you do not bias correct temperature input, only precipitation? Could it be that temperature biases (or their elevation dependency) might impact SWE results?
- Conclusions read more like a summary. Please think of further implications or generalizations. For instance, mountain precipitation is a challenge everywhere, and your method is rather elegant and does not need in-situ data…
- Regarding the derivation of precipitation correction factors: What would the differences be if you used observational data (in situ, or gridded like Daymet or similar) to bias correct precipitation, as is done e.g. in a climate change context?
- From an application point of view in streamflow forecasting: The authors’ approach is conceptually similar to hydrological modelling, where parameters for precipitation correction are often calibrated to match observed streamflow. Maybe the authors could briefly discuss this.
- Just out of curiosity: Have you tested seasonally varying b factors?
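Purely to illustrate what this last question asks about, seasonally varying factors could be applied per month of the water year, as in the hypothetical sketch below; the values and function are invented, and nothing like this is tested in the manuscript.

```python
import numpy as np

# Hypothetical monthly correction factors for one pixel, in water-year
# order (index 0 = October, ..., 11 = September). Values are made up.
monthly_b = np.array([1.1, 1.2, 1.4, 1.5, 1.5, 1.4,
                      1.3, 1.2, 1.1, 1.0, 1.0, 1.0])

def correct_daily_precip(precip_mm, water_year_month):
    """Scale one daily precipitation value by its month's factor."""
    return monthly_b[water_year_month] * precip_mm

# Example: a 12 mm storm in December (water-year month index 2).
print(correct_daily_precip(12.0, 2))   # -> 16.8
```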
Citation: https://doi.org/10.5194/egusphere-2024-3389-RC1
RC2: 'Comment on egusphere-2024-3389', Anonymous Referee #2, 13 Dec 2024
General comments
The authors present a data assimilation framework to derive precipitation correction factors, providing improved predictions of snow water equivalent (SWE) at the peak of winter. This research is highly relevant, as such SWE estimates are critical for the reliable management of water resources in catchments reliant on seasonal snowmelt. The methods presented demonstrate promising results, and the study is well-written and provides valuable insights. However, the following questions should be addressed to strengthen the manuscript:
- Accuracy of the “historical reference” dataset: How accurate are the SWE values in the historical reference dataset? Many of the results rely on comparisons with this dataset, yet an evaluation of its associated errors appears to be missing. Including such an assessment or references to past studies would enhance the robustness of the findings.
- Precipitation correction factors vs. snowfall correction factors: Why did the authors choose to derive precipitation correction factors using snow cover fraction observations instead of deriving snowfall correction factors? Since snow cover fraction observations primarily reflect snowfall events rather than total precipitation, further explanation of this choice is warranted.
- Discrepancy in cumulative observed streamflow: How can the cumulative observed streamflow during the snowmelt period be significantly lower than the model predictions, given that (a) simulated SWE seems underestimated compared to the historical reference dataset, and (b) rainfall is excluded from the modelling? Please provide additional information on evaporation rates (e.g. from the land surface model) to clarify whether these losses could justify the observed differences between measured and modeled cumulative runoff.
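To spell out the terms behind this last question, a back-of-envelope melt-season water balance might look like the sketch below; all numbers and variable names are hypothetical and only indicate which losses would need to be quantified.

```python
# Hypothetical April-July water balance, basin-averaged, in mm.
snowmelt     = 600.0   # cumulative simulated snowmelt
rainfall     = 80.0    # liquid precipitation (excluded from the comparison)
evaporation  = 120.0   # evaporation/sublimation, e.g. from the land surface model
infiltration = 90.0    # net loss to soil and groundwater storage

runoff_estimate = snowmelt + rainfall - evaporation - infiltration
observed_runoff = 450.0

print(f"estimated runoff: {runoff_estimate:.0f} mm")                          # 470 mm
print(f"residual vs. observed: {runoff_estimate - observed_runoff:.0f} mm")   # 20 mm
```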
Addressing these questions in the manuscript would enhance its clarity and impact. Further detailed comments are outlined below. In summary, this is an excellent study that would benefit from some additional discussion and information to the reader.
Specific comments
L 44-45: Please add citations from a wider range of researchers than those involved in this study to show that data assimilation is a popular method for improving snow modelling results.
Section 1: The introduction lacks a clear outline of the research gap that warrants another study using a reanalysis framework for improving SWE estimates. What aspects have been covered by existing studies, and what questions remain open? How does the proposed study contribute to existing literature on the topic? Answers to these questions in the text might help readers to better understand the motivation for this study.
Section 2.1: A short summary of snow and meteorological characteristics would be informative. Snow cover duration, annual precipitation and snowfall, average summer/winter temperature and similar variables.
L 101-102: I think more details about the model are needed, in particular features that are relevant for this study. At what resolution were the simulations performed (this is given later but should be placed here I think)? How was precipitation separated into rain- and snowfall? Is this a single- or multi-layer energy-balance snow model? Currently, the information about the modelling is too sparse.
L 105: What is the spatial and temporal resolution of MERRA2?
L 112-113: What parameters (mean and standard deviation) were used for this distribution?
L 123-128: Information about the actual ensemble size and initial conditions needs to be given here. Additionally, it would be helpful to add some brief information about how the meteorological forcing data was interpolated to the model grid.
L 177-179: Perhaps move this sentence to the previous paragraph, maybe L 174.
Section 2.4: I am wondering if I understand the workflow correctly. Step 1: the reanalysis framework (presented in section 2.3) is used to estimate precipitation correction factors by assimilation of all available fSCA observations; an analysis performed for each water year separately. This step leads to precipitation correction factors as displayed in Figure 2. Step 2: in the forward modelling experiments, this “database” of precipitation correction factors is used in the “historically informed” cases, whereas the uncorrected and uniform cases use assumptions about precipitation not derived in section 2.3. Step 3: in the “data assimilation experiments”, observations of fSCA and SD are assimilated until 1st of April, on top of the precipitation correction factors. However, if my outline above is correct, the sentence on lines 182-184 confuses me, since in the “reanalysis framework”, precipitation correction factors are derived by assimilation of fSCA, which are then used in Step 2. Thus, the “historically informed” cases actually use information obtained by data assimilation. Correct or not? If correct, please rephrase the sentence.
L 203: Does Table 1 refer to the table in the current paper, or that of Fang et al. (2022)?
L 216-217: Does this mean that the application of the bias correction factors for wet, normal and dry years dampens annual variations in precipitation contained in the original forcing dataset? If so, maybe add a sentence describing this behaviour of the Case B precipitation corrections.
L 231: Please change from “to SWE” to “to maximum/peak SWE” or similar.
L 240-242: Just curious: What is the average number of fSCA observations available per pixel for each year? It seems Figures S4 (not mentioned in the manuscript) and S5 contain this information. Maybe reference that information somewhere in the paper.
Section 2.5: Why were snowmelt, liquid precipitation, and evaporation from the land surface model not used directly here for comparison against observed runoff? How large is the amount of liquid precipitation and evaporation/sublimation from April to August?
L 290: Is the “geographical distinction” eliminated in 2016? I still see an overestimation at high altitude and an underestimation at low elevations when looking at Figure 5b.
L 290-294: What is the error of the “historical reference” SWE dataset? The evaluation is performed against this reference. Is the statement “that the historically informed, spatially distributed bias correction is able to correct biases in input precipitation” justified considering that the “historical reference” may contain errors itself? If so, rephrase the sentence, or add a validation of the “historical reference”. I assume that the errors of the reference SWE dataset have been assessed in other publications. If so, it would be useful to mention the errors associated with the “historical reference” in section 2.3 and cite relevant literature.
Section 3.2.2 and caption to Figure 9: I think it would be helpful to not use the word “reference” here to avoid confusion with the “historical reference” used everywhere else. Instead of “reference”, please use “ASO-derived SWE” or alike.
L 426: Maybe “posterior” instead of “prior” since snow depth data seem to have been assimilated.
L 427-429: What is the meaning of this sentence here? It seems disconnected to the discussion about summary statistics.
L 457-458: How much water is expected to infiltrate into the soil column or evaporate within this basin? I am particularly surprised by the poor performance in predicting cumulative runoff, as illustrated in Fig. 12a-c. In both 2016 and 2017, most experiments significantly overpredict cumulative runoff. Simultaneously, Figs. 7 and 8 reveal an underestimation of SWE by the Case A and Case B experiments. Furthermore, verification against ASO data, as shown in Fig. 10c, indicates a general underestimation of SWE, with the degree of underestimation varying across experiments. In light of these underestimations, could it be reasonable to attribute these runoff overestimations to water losses due to evaporation and soil infiltration? In particular when considering that rainfall seems to be neglected in these water balance calculations.
L 490: Change “happen” to “occur”.
L 482-504: What are the regression coefficients? Do they align (e.g., slope less than one) with the results presented in the first part of section 3.3?
L 530-532: Does this refer to the results presented in Fig. 12 or 13? And what are these improvements compared against?
Technical comments
L 31: Remove “snow water equivalent”.
Figure 3, caption: Should (e) be (d)?
Table 2: What does DOWY mean?
L 347: Picky, but change from 3200 to 3246.
L 368: Change from Fig. 7 to Fig. 8.
L 409: I assume this should be “Fig. 9 h, k” instead of “Fig. 9 k, l”.
L 413: I assume this should be “Fig. 9 i, l” instead of “Fig. 9 h, i”.
L 414: I assume this should be “Fig. 9 g” instead of “Fig. 9 j”.
L 419: I assume this should be “Fig. 9 j” instead of “Fig. 9 g”.
L 427: I assume this should be Fig. 11b instead of Fig. 10b.
Citation: https://doi.org/10.5194/egusphere-2024-3389-RC2