This work is distributed under the Creative Commons Attribution 4.0 License.
Assessing the impact of meteorological forcing and its uncertainty on snow modeling and reanalysis
Abstract. Large uncertainties in global model-based snow datasets, particularly in snow water equivalent (SWE), limit our understanding of snow storage and its response to climate change. These uncertainties are sensitive to the meteorological inputs used to force offline snow models. In this study, we assessed the impact of three meteorological forcing datasets (ERA5, MERRA-2, and NLDAS-2) on ensemble SWE estimates within a probabilistic snow modeling and reanalysis framework across three snow-dominated mountainous watersheds in the western US. Prior (open-loop) SWE estimates show significant inter-dataset variability, primarily driven by differences in cumulative snowfall. SWE errors are dominated by bias, and no single forcing dataset consistently outperforms the others across all domains or elevations. To assess the value of using multiple products, we construct a multi-forcing ensemble using least-squares-based weighting informed by prior performance. The multi-forcing ensemble reduces errors compared to individual forcings and improves prior SWE accuracy across all regions. Assimilation of near-peak lidar-derived snow depth substantially corrects prior SWE errors, reducing the influence of forcing-driven biases accumulated during the snowfall season. As a result, random error is the dominant source of posterior error. Although assimilation narrows performance differences, the multi-forcing ensemble still yields slightly better overall accuracy and improved uncertainty characterization. This work demonstrates that integrating diverse meteorological forcings within a data assimilation framework improves SWE estimates (both model-based and reanalysis-based), especially where the optimal forcing dataset is uncertain.
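As an illustrative aside (not part of the preprint): the abstract refers to least-squares-based weighting informed by prior performance without detailing it here, and the referee summary below notes that the weights are informed by errors against lidar-based SWE near peak accumulation. The minimal sketch that follows shows one plausible way such forcing weights could be computed; the function and variable names (`forcing_weights`, `prior_swe`, `reference_swe`) are hypothetical, and the non-negative least-squares formulation is an assumption, not necessarily the authors' actual scheme.

```python
import numpy as np
from scipy.optimize import nnls

def forcing_weights(prior_swe, reference_swe):
    """Illustrative sketch: estimate per-forcing weights by non-negative
    least squares against a reference (e.g., lidar-derived) SWE field,
    then normalize so the weights sum to one.

    prior_swe     : (n_pixels, n_forcings) prior SWE from each forcing dataset
    reference_swe : (n_pixels,) reference SWE at the same pixels
    """
    weights, _ = nnls(prior_swe, reference_swe)   # minimize ||A w - b||^2 with w >= 0
    if weights.sum() > 0:
        weights = weights / weights.sum()         # convex combination of forcings
    return weights

# Hypothetical usage with three forcing datasets (e.g., ERA5, MERRA-2, NLDAS-2)
rng = np.random.default_rng(0)
truth = rng.uniform(0.1, 1.5, size=500)                      # synthetic "reference" SWE (m)
priors = np.column_stack([truth * f + rng.normal(0.0, 0.05, 500)
                          for f in (0.8, 1.1, 0.95)])        # three biased, noisy priors
print(np.round(forcing_weights(priors, truth), 3))
```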
Status: final response (author comments only)
RC1: 'Comment on egusphere-2025-4505', Benoit Montpetit, 02 Dec 2025
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2025/egusphere-2025-4505/egusphere-2025-4505-RC1-supplement.pdf
Citation: https://doi.org/10.5194/egusphere-2025-4505-RC1
RC2: 'Comment on egusphere-2025-4505', Anonymous Referee #2, 03 Dec 2025
Summary and Recommendation
The authors present a study on the impact of meteorological forcing data selection on snowpack modeling in three basins in the U.S.A. Specifically, the study examines three reanalysis datasets (ERA5, MERRA2, NLDAS2) downscaled for input into a land surface model (SSiB-SAST) and considers a “multi-forcing” scenario based on a weighting scheme that is informed by errors when compared to lidar-based SWE near peak accumulation. Subsequently, the impact of data assimilation of snow depth is evaluated for both the single-forcing scenarios and the multi-forcing scenario. Error components (RMSE, bias, unbiased RMSE) are evaluated along with the weights.
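Hedged aside for readers less familiar with these error components (notation assumed here, not quoted from the preprint; this is presumably the relation behind the later comment on L. 270):

```latex
% Assumed notation: \hat{x}_i = estimated SWE, x_i = reference SWE, N = number of samples
\mathrm{Bias}   = \frac{1}{N}\sum_{i=1}^{N}\bigl(\hat{x}_i - x_i\bigr), \qquad
\mathrm{RMSE}   = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(\hat{x}_i - x_i\bigr)^{2}}, \qquad
\mathrm{ubRMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl[\,(\hat{x}_i - x_i) - \mathrm{Bias}\,\bigr]^{2}},
\quad\text{so that}\quad
\mathrm{RMSE}^{2} = \mathrm{Bias}^{2} + \mathrm{ubRMSE}^{2}.
```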
I find this a novel and well-written paper that provides a substantial contribution to the literature. I believe this will be a useful paper for the snow modeling community and think it would be suitable for publication after attention to some generally minor comments/issues (see below).
Major comments
- The most major critique I have of the paper is that it does not include a sampling of hydrologic conditions (i.e., wet, dry, and average years). The period analyzed (WY 2019) was a wet, high snow accumulation year in both the California Sierra Nevada and the Colorado Rocky Mountains. The main justification for this year seems to be that lidar snow data were available across all three basins (L. 111-112), but I don’t think it is necessary that the same year be used in distinct regions (CA vs. CO). It is the choice of the authors whether or not to bring additional years (e.g. dry, average) into the analysis, but at a minimum I think the paper should provide more description of the snow/weather conditions in the study year(s) and include some discussion on how the type of snow year may influence the DA (e.g., see Margulis et al. 2019, GRL).
General comments
- I find that Section 3 is more of a “Results” section than a “Results and Discussion” section (as intended). I see minimal elements of a classic discussion section – e.g., comparisons to other studies, discussions of future research needs, etc. I would suggest adding more discussion elements throughout Section 3 (as appropriate), or alternatively adding a short subsection at the end of Section 3 that provides a more substantive discussion.
- A result that is interesting but not discussed in detail is that there are quite different weights for Aspen versus Gunnison-East (e.g., Tables 3 and 5). This is surprising (at least to me), considering that they are adjacent basins (Fig. 1). Why is this result obtained and what might it suggest about the forcing data and/or the snow in these basins?
Line Comments
- L. 14-20: One nuance that is not conveyed clearly here is that the multi-forcing reduces errors relative to most forcing datasets, but not all forcing datasets. As written, it sounds like the multi-forcing is always the most accurate. Can you convey this nuance while also indicating that the “best” forcing dataset cannot be known a priori, and the “best” dataset may vary in space and time?
- L. 45: “transboundary” is often used in water studies in regard to rivers that cross international political boundaries, which is not true for all the mountain ranges referenced here (e.g., Sierra Nevada). Please reword.
- L. 47: To be more exact, I suggest replacing “estimates” with “process-based estimates” or “hydrological model estimates”. Physical versus statistical approaches for estimating runoff are impacted differently by snow data uncertainty, and I think the sentence is more relevant to the former.
- L. 91: What does “readily available” mean in this context?
- L. 125-126: Need to add downwelling longwave radiation here?
- L. 167-168: Broxton et al. (2016) may also be relevant here.
- L. 270: Suggest including the RMSE^2 equation as a distinct/numbered equation (#4).
- L. 286: For clarity, make this “(i.e., N=120)”.
- L. 321: Remove “variations”.
- L. 322: Replace “differences” with “ranges”?
- L. 342-345 and Fig. 5: The SWE depth versus elevation plots are useful, but I would argue that not all elevation bands are “equal” in a hydrologic or snow storage sense, considering that there may be very different amounts of land area contained in each elevation band (depending on the hypsometry). It would be useful to know how this looks for SWE volume versus elevation, perhaps as a supplementary figure? (One minimal way to compute this is sketched after these line comments.)
- L. 400-401: Note here that the absolute bias of the multi-forcing is still higher than ERA5 and NLDAS2.
- L. 444-446: I do not disagree with this discussion point. However, I think it may also be worth noting that the accumulation and ablation seasons span quite different lengths of time, which means there is more opportunity for errors (bias) to build up during the accumulation season. The accumulation season may be two to three times longer in duration than the ablation season, and in this study the “ablation season” is only partial because the ASO surveys occur part of the way through the ablation season (i.e., before complete melt out).
- L. 471: Add “(Fig. 7)” after “unknown”.
- L. 480-481: This is a great point and one that the community should appreciate on the value of SD data assimilation.
- L. 527-528: Add “(class 0)” after “ASO-based SWE” and “(class 3)” after “none of them do” to help clarify the conventions.
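Editorial aside on the L. 342-345/Fig. 5 comment above (not from the preprint): aggregating to SWE volume per elevation band is a small calculation once pixel areas are known. The sketch below is one minimal way to do it; the names (`swe_volume_by_band`, `pixel_area_m2`, `band_edges_m`) are hypothetical and a uniform pixel area is assumed.

```python
import numpy as np

def swe_volume_by_band(swe_m, elev_m, pixel_area_m2, band_edges_m):
    """Illustrative sketch: total SWE volume (m^3) per elevation band, so that
    hypsometry (land area per band) is accounted for, unlike a profile of
    mean SWE depth versus elevation.

    swe_m, elev_m : 1-D arrays of per-pixel SWE depth (m) and elevation (m)
    pixel_area_m2 : scalar pixel area (m^2), assumed uniform (e.g., 50 m grid -> 2500)
    band_edges_m  : sorted 1-D array of elevation band edges (m)
    """
    band = np.digitize(elev_m, band_edges_m)      # band index for each pixel
    n_bands = len(band_edges_m) + 1
    # Sum SWE depth within each band, then convert depth x area to volume
    return np.bincount(band, weights=swe_m, minlength=n_bands) * pixel_area_m2

# Hypothetical usage: synthetic pixels binned into 100 m bands between 2500 and 4000 m
rng = np.random.default_rng(1)
elev = rng.uniform(2500.0, 4000.0, 10_000)
swe = np.clip(0.001 * (elev - 2500.0) + rng.normal(0.0, 0.1, elev.size), 0.0, None)
edges = np.arange(2500.0, 4000.0 + 1.0, 100.0)
volumes = swe_volume_by_band(swe, elev, pixel_area_m2=2500.0, band_edges_m=edges)
print(volumes.round(0))
```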
Figures and Tables
- Figure 1: In the lower right panel, suggest rounding the mean value to the nearest mm.
- Figure 7 and Figure 9: I found these confusing and it took me some time to finally figure out what they are showing. At first I thought that some of the forcing cases were missing, and it wasn’t clear to me until I read the results text that the “middle” case (not best, not worst) was being omitted. For clarity, consistency, and completeness, I think it would make sense to include all 3 forcing scenarios and the multi-forcing (as in the right panels). You could denote the best/worst by placing a marker above the corresponding bar.
- Figure 12: Would this be better displayed as a table rather than a figure?
References
- Broxton, P. D., Zeng, X., and Dawson, N.: Why Do Global Reanalyses and Land Data Assimilation Products Underestimate Snow Water Equivalent?, Journal of Hydrometeorology, 17, 2743–2761, https://doi.org/10.1175/JHM-D-16-0056.1, 2016.
- Margulis, S. A., Fang, Y., Li, D., Lettenmaier, D. P., and Andreadis, K.: The Utility of Infrequent Snow Depth Images for Deriving Continuous Space‐Time Estimates of Seasonal Snow Water Equivalent, Geophysical Research Letters, 46, 5331–5340, https://doi.org/10.1029/2019GL082507, 2019.
Citation: https://doi.org/10.5194/egusphere-2025-4505-RC2
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 207 | 79 | 22 | 308 | 46 | 14 | 17 |