the Creative Commons Attribution 4.0 License.
Benchmarking Snow Fields of ERA5-Land in the Northern Regions of North America
Abstract. Reanalysis products provide new opportunities for assessments of historical Earth System states. This is crucial for snow variables, where ground-based observations are sparse and incomplete, and remote sensing measurements still face limitations. However, because reanalysis data are model-based, their accuracy must be evaluated before being applied in impact and attribution studies. In this study, we assess the accuracy of ERA5-Land's snow cover, snow depth, and Snow Water Equivalent (SWE) across monthly, seasonal, and annual scales within the ecological regions of Canada and Alaska, regions that are characterized by prolonged seasonal snow cover. Using MODIS satellite snow cover observations and the gridded snow depth/SWE analysis data from the Canadian Meteorological Centre, we conduct a consistent benchmarking of ERA5-Land's snow fields to (1) identify discrepancies at both gridded and regional scales, (2) evaluate the reproducibility of the spatial structure of snow variables, and (3) uncover potential spatial patterns of discrepancies in ERA5-Land's snow statistics. Our results highlight significant discrepancies, particularly for snow depth and SWE, where ERA5-Land tends to grossly overestimate long-term mean values and interannual variability, while underestimating trends, i.e., moderating positive trends and exaggerating negative ones. The discrepancies in SWE, however, are primarily driven by biases in snow depth rather than snow density. We therefore advise against the direct use of ERA5-Land's snow depth and SWE in Canada and Alaska. While snow cover and snow density may still be useful for impact and attribution studies, they should be applied with caution and potential bias corrections, particularly at local and smaller scales.
Status: final response (author comments only)
CC1: 'Comment on egusphere-2024-4150', Colleen Mortimer, 27 May 2025
AC4: 'Reply on CC1, Drs. Colleen Mortimer and Lawrence Mudryk', Ali Nazemi, 16 Sep 2025
We sincerely appreciate the constructive review provided by Drs. Colleen Mortimer and Lawrence Mudryk from Environment and Climate Change Canada (ECCC). We also apologize for the delay in our response. We intentionally waited until the formal review process and public discussion period had concluded to ensure that we could carefully consider all feedback received, design a coherent revision strategy, and provide an integrated response that addresses both the reviewers’ comments and the broader community input. Please find our detailed, point-by-point responses to your comments below.
Note: Review comments are numbered and responded to in the order received.
1. This paper seeks to understand the performance of snow estimates from ERA5-Land over Canada. Canada-specific analysis of the performance of snow products is lacking in the broad scientific literature; this work could help fill this knowledge gap.
Response: Many thanks for reviewing our work. We are happy that you found our work worthy and recognized that this piece can fill the existing gap in the literature with regard to Canada- and Alaska-specific benchmarking of ERA5-Land snow fields. Given the current use of ERA5-Land reanalysis data, even in engineering and design applications, we feel that such specific benchmarking attempts are both timely and necessary to inform future applications.
2. The authors evaluate 3 snow-related variables from the ERA5-Land reanalysis: snow water equivalent (SWE), snow depth (SD), and snow cover fraction (SCF). For each of these three variables, they use a single reference dataset intended to represent "truth". For SCF, the uncertainty in the MODIS dataset may be low enough that a single reference product is sufficient for the evaluation. However, for snow depth and SWE, there is much more uncertainty in historical estimates. At present some of this uncertainty is irreducible, and so previous work has demonstrated the value in using an ensemble of datasets for evaluating SWE (Mudryk et al. 2015; 2025; Mortimer et al. 2020). Ensembles are helpful in providing a range of reasonable values against which outliers can be screened (especially for climatological snow mass and trends (Mudryk et al 2015, 2025)).
Response: Many thanks for this thoughtful comment. We absolutely agree that, given the limitations in current in-situ, remote-sensing, and reanalysis data, reference datasets are in fact “partial truths” and no single dataset can be considered an “absolute truth”. In our revisions, we will ensure that each snow variable is cross-compared against at least two different datasets, and we will perform the benchmarking at both daily (except for snow cover fraction; SCF) and monthly timescales.
3. The author’s decision to rely on CMC as the only reference data for SD and SWE is further complicated because ERA5-Land is optimized for SWE whereas CMC is optimized for SD. Discrepancies between it and CMC may stem either from errors in the SWE values, or in the parameterizations, and the analysis presented does not distinguish which source of error is contributing to the discrepancy.
Response: Thanks for another thoughtful comment. In our revisions, we will consider both ground-based data from NorSWE (thanks for introducing this valuable dataset to us) and the CMC-SD analysis for benchmarking the SD field of ERA5-Land. For SWE, we will use four data sources, i.e., ground-based data from NorSWE, CMC-SWE, AMSR, and the blended monthly data from ECCC (https://climate-scenarios.canada.ca/?page=blended-snow-data). We do know that ECCC also produced a daily dataset for the Northern Hemisphere; however, as that daily dataset includes the ERA5-Land product, it cannot be used for benchmarking the ERA5-Land product itself, because doing so creates a circularity problem, well noted in epistemology and the philosophy of science (i.e., something, or part of something, cannot be used as a reference for itself). As we show in the current version of the paper, the discrepancy between CMC’s and ERA5-Land’s SWE is mainly due to discrepancies in SD rather than snow density. In our revision we will also consider cross-comparison of both CMC’s and ERA5-Land’s SD and SWE fields against in-situ data. This will provide new insight into the nature of error and the differences between CMC’s and ERA5-Land’s estimates of SD and SWE.
4. Although the CMC product provides monthly SWE it is not really a SWE product. Instead, climatological snow density values from a lookup table, which don't evolve over the time series, are used to go from SD to SWE. Therefore, the CMC SWE product should not be considered as a reference 'truth'. On the other hand, SWE is the prognostic variable directly simulated in ERA5-Land, while SD is estimated using snow density parameterizations so discrepancies between it and CMC are expected.
Response: We do agree that CMC’s SWE is more of an offline product, obtained by multiplying the assimilated/modelled SD by the climatological snow density. This is indeed different from ERA5-Land, in which SWE is directly calculated inside the model. In our revision, we do not consider CMC’s SWE as the “truth” but as a means to show how much SD assimilation can improve the representation of SWE.
Having said that (as we summarized in the Supplement), it should be noted that both CMC and ERA5-Land use the mass balance equation for calculating SWE, although with different formulations for snow density and melt. The key differences between CMC and ERA5-Land are in (1) the way melt is represented (physically based, according to the energy balance, in ERA5-Land vs. a conceptual melt-factor model in CMC), and (2) the empirical formulations with which snow density is calculated. Similar to ERA5-Land, SWE is also translated to a “first-guess” SD in the CMC model by dividing SWE by the estimated snow density. We showed in our current Discussion section that the difference between SWE estimates in CMC and ERA5-Land is not so much related to estimates of snow density as it is to discrepancies in estimations of SD, and we believe this is because SD is assimilated in CMC. The only objective way to prove this is through comparison with ground data, which we plan to do in the revised version.
5. While the CMC product does assimilate ground observations, these are not available over the entire country and therefore the SD values in the CMC product represent a mixture of information from both ground observations and the snow model. This means that away from locations with assimilated data, the assessed differences between CMC and ERA5-Land will just represent differences in the snow models used to produce each product.
Response: That is a great point and it is indeed valid. In our revisions, we will discuss this as a source of “deep uncertainty” and show how much this issue matters in terms of estimations of statistical characteristics of SD and SWE in places where assimilation is not possible and there is no ground-based data to provide a notion of accuracy about any of the available products.
6. We encourage the authors to identify a more appropriate set of SWE products to use in their evaluation and to discuss the limitations of ERA5-Land's SD estimations.
Response: We appreciate your suggestion. As mentioned in our response to your 3rd comment, apart from CMC’s SWE, we will consider three other data sources for benchmarking ERA5-Land’s SWE. We will also extend the discussion on the reasons behind limitations of ERA5-Land SD estimation.
7. Additional data could include other reanalysis datasets and/or in situ data (e.g. NorSWE (Mortimer and Vionnet, 2025; https://zenodo.org/records/15263370) for SWE, global SYNOP network or GHCN-D [https://www.ncei.noaa.gov/products/land-based-station/global-historical-climatology-network-daily] for SD) as used in Mortimer et al 2024 and Mudryk et al 2025.
Response: We will extensively use NorSWE data in our revisions and will consider it as the ground-based truth for SD, SWE and snow density. We will also consider two other SWE products, i.e., AMSR and ECCC’s blended SWE product, for the purpose of cross-comparison.
For SD we also use NorSWE ground-based data in addition to CMC and we will perform the benchmarking at both daily and monthly timescales.
8. The strategies employed to understand the reproducibility of spatial structure are interesting, if expanded, could provide useful insight about the strengths and limitations of ERA5-Land.
Response: We are happy that you found our analyses of spatial structure interesting and valuable. We believe this aspect is a key difference between our work and other benchmarking studies available in the current literature. We plan to extend this in our revisions.
9. Finally, we have a few minor comments about the treatment of the CMC product and the analysis regions.
9.1 How were the limitations listed in Brown and Brasnett 2010 Section 3.2.1 Warnings and Notices addressed?
Response: We filtered out all the zones identified and considered the data as missing.
9.2 How was permanent land ice accounted for?
Response: This is also masked out using the text file provided in the data directory, i.e., cmc_analysis_lsmask_binary_nogl_v01.2.txt. Also please note that we regridded the data onto the grid of NASA’s Making Earth System Data Records for Use in Research Environments (MEaSUREs) global landscape Freeze-Thaw Earth System Data Record (FT-ESDR), which filters out areas with permanent land ice.
9.3 Given the snow densities are based on snow classes, did you consider using snow classes instead of ecoregions?
Response: No, we did not. We initially considered doing this in our revision; however, we plan not to do so for two reasons. First, in our revision, we will consider in-situ snow density data from NorSWE for cross-comparison of both CMC’s and ERA5-Land’s snow density. Second, ecozones are a broader land classification than snow classes, and we thought comparing the products at the ecozone scale provides a more unbiased basis for comparison, given that CMC directly uses snow classes (which would introduce another potential circularity problem).
References
Mudryk, L. R., Derksen, C., Kushner, P. J., and Brown, R.: Characterization of Northern Hemisphere Snow Water Equivalent Datasets, 1981–2010, J. Climate, 28, 8037–8051, https://doi.org/10.1175/JCLI-D-15-0229.1, 2015.
Mudryk, L., Mortimer C., Derksen, C., Elias-Chereque, A., Kushner, P.: Benchmarking of SWE products based on outcomes of the SnowPEx+ Intercomparison Project, The Cryosphere, 19, 201–218, https://doi.org/10.5194/tc-19-201-2025, 2025.
Mortimer, C., Mudryk, L., Derksen, C., Luojus, K., Brown, R., Kelly, R., and Tedesco, M.: Evaluation of long-term Northern Hemisphere snow water equivalent products, The Cryosphere, 14, 1579–1594, https://doi.org/10.5194/tc-14-1579-2020, 2020.
Mortimer, C. and Vionnet, V.: Northern Hemisphere in situ snow water equivalent dataset (NorSWE, 1979–2021), Earth Syst. Sci. Data Discuss. [preprint], https://doi.org/10.5194/essd-2024-602, in review, 2025.
Response: Many thanks for providing these valuable references. We tremendously benefited from these and they will majorly guide us through our revisions.
Citation: https://doi.org/10.5194/egusphere-2024-4150-AC4
RC1: 'Comment on egusphere-2024-4150', Steven Fassnacht, 27 Jun 2025
General
This paper assesses the snow information from the ERA5-Land dataset at a 9 km, hourly resolution, in comparison to Moderate Resolution Imaging Spectroradiometer (MODIS) SC data at a 5 km, monthly resolution and the Canadian Meteorological Centre’s (CMC) SD/SWE product at a 24 km, daily/monthly resolution. The ERA5 dataset is being used more and more, and this is an important assessment. There is a mismatch of spatial and temporal resolutions between the ERA5L and evaluation datasets, and the authors use data harmonization. The final assessment is 1 March 2000 to 31 December 2020 at 25 km on an average monthly time step for October to June (monthly, seasonal, annual). Twenty-one ecological regions across Canada and Alaska were considered.
Overall, this is a lot of work and is the basis for a reasonable paper. However, the paper needs reorganization (e.g., Methods moved from Results and Discussion, improved figures, better explanation on how to interpret figures). There are issues with the “truth” data that are used. Those are not discussed. The Discussion is lacking and needs to circle back to the Introduction.
While it is understandable that the coarsest spatial and temporal resolutions are used to have consistency, this coarsens the dataset being assessed (ERA5L) by 25 times spatially and 720 times temporally. This removes some of the nuance of the finer resolution ERA5L data. The snowpack does not vary drastically on a daily basis, so coarsening to that resolution is acceptable. However, using a monthly time step and averaging the ERA5L data is a problem. The implications need to be discussed further.
While it is acceptable to use the standard deviation (Std Dev – I don’t have sigma handy on my keyboard) to assess interannual variability, this is biased when a trend is present. Specifically, SD could be large because there is a lot of interannual variability, or because there is a large trend. Consider detrending the time series to better assess the interannual variability. If you don’t address this, at least discuss the implications.
The figures are somewhat understandable but need work. I cannot distinguish some of the colors from one another, i.e., there are sets of 3 regions with almost the same color, as overall there are only about 6 colors. Perhaps use a sub-set of the 21 regions. Further, the rainbow color ramp is difficult for people with visual impairment. The individual panes in each figure tend to be small, especially when 12 panes are present (Figures 2 to 5). Make each pane larger, homogenize the x-axes range, and remove some of the white space and repeated axes labels. Probably add a letter to each figure pane.
At the beginning of each section in the Results, some methods are introduced in the first paragraph. These should be moved to the Methods section. There is some explanation on how to read the figures, but this seems incomplete. For example, when I look at Figure 2, is there a good shape, i.e., ERA5L is close to the “truth?” I assume that a perfect correlation between ERA5L and “truth” is a vertical line along RD* = 0. It is difficult to interpret what the different curves mean. Referring to Figure 3, the authors use CV (SD / mean) > 1 (vs. < 1), but this cannot be distinguished in Figure 3.
Figures 6 and 7 seem to be meant to explain the previous results. However, they would be better placed in the actual Results section. Similar to above, there are methods presented at the beginning of several of the Discussion sub-sections.
The Discussion does not explain the work outside of the work itself. There are no citations and the authors do not use the literature to explain their findings. As I state above, there are spatial and temporal issues and potential problems with the “truth” datasets. For example, SD/SWE are derived using the Brown et al. (2003) model, but that model has some large assumptions, such as the rain-snow threshold of +2C (this varies with climate and other factors), and the new snow density (the Hedstrom-Pomeroy equation is wrong based on the data that it is fit to). This is not stated in Supplement S2, so it is unclear whether it is used in the paper: Brown et al. (2003) do not account for precipitation undercatch and apply a 20% reduction in precipitation to account for sublimation and blowing snow. This is very important across much of the study domain (less so regions 12, 13, 15, 17, 20? – I can’t tell exactly from the colors). The actual difference between precipitation, undercatch, sublimation and blowing snow varies spatially and temporally.
Specific
- Line 33: While there is no complete agreement on snow variable names, we typically call Snow Cover “Snow Covered Area,” with the short form SCA. SD is acceptable for snow depth, but d_subscript_s is often used.
- Line 39: SWE and the others are “variables,” not “parameters,” as they change over both space and time. In the next sentence you use “variables.” I recommend using the term “variables” here.
- Lines 42-55: This paragraph is relevant, but it seems somewhat superfluous, as it gives information that is already known to the cryospheric community. It is short but is somewhat of a “throw away” paragraph – consider being more succinct.
- Line 43: I question if in-situ measurements are the most “accurate?” There are various papers that talk about the point to area problem with field measurements.
- Lines 109-110 and Supplement S1: this is a good addition to explain how the Hydrology-Tiled ECMWF Scheme for Surface Exchanges over Land model works to represent the snow variables in ERA5L. Thank you!
- Lines 119-120 and Supplement S2: this is a good addition to explain the “Brown 2003” model.
- Lines 130-131 and Supplement S3: this is a good addition to explain the monthly MODIS time series data.
- Line 135: here you use the word “altitude,” but in the previous part of the sentence you say “elevation.” Elevation is the correct term. Also, be consistent.
- Lines 140, 141, etc.: Data is a plural word, so it should read “The data “are” publicly …”
- Lines 174-179: put these 21 ecological regions in a table. Also give us the area of each. The general location could be helpful, as several are blended together in Figure 1a (and are thus indistinguishable).
- Figure 1 can be improved. Figures 1c and 1d should have the same sized y-axis as they both go from 0 to 100%.
- Line 196: It is the Theil-Sen slope, not just Sen. Also, the two (self) citations provided do not speak specifically to the Mann-Kendall test or Theil-Sen slope – use appropriate citations here.
- While equations 1 and 2 are very simple, they are acceptable since X is a statistic and not a variable.
- Line 211: do you “consider” or “use” the scaled version. Here and in other locations in the text, the language is tenuous.
- Line 222: good idea to consider “brevity,” but are the result presented at the monthly scale? Figures 2, 3, 5, 6, 7 present annual results.
- Lines 224-235: This paragraph is mostly methods and should be moved to that section. More explanation on how to read Figure 2 would be useful – what shape of lines is good, i.e., ERA5L is close to the “truth?”
- Figure 2: this figure is difficult to understand, partly because the individual figures are small. I assume that the right figure is D and the left 3 are RD*? Consider using the same x-axis scales for all 12 figures so that the reader can visually compare the results (at least the same for all RD and for all D). As per my comment about the colors in Figure 1, I cannot tell regions apart. Consider how you can improve upon this – perhaps 10 representative figures. Since ECDF = 0.5 and RD* = 0 are the centre, perhaps a horizontal dotted line across ECDF = 0.5 would help.
- Figure 3: The caption should read second moment (Std Dev) “versus” the first moment
- Figure 4: apparently the spatial structure can be something other than the 4 states listed? For example, SC for region 1 is along the dashed line for all four statistics. Does that mean that they are the same, i.e., complete agreement?
I stopped examining specifics at this point, as the general reworking of the paper is necessary before the details can be examined.
Citation: https://doi.org/10.5194/egusphere-2024-4150-RC1
AC1: 'Reply to comments from RC1, Dr. Steven Fassnacht', Ali Nazemi, 15 Sep 2025
Publisher’s note: this comment is a copy of AC2 and its content was therefore removed on 17 September 2025.
Citation: https://doi.org/10.5194/egusphere-2024-4150-AC1
AC2: 'Reply to comments from RC1, Dr. Steven Fassnacht', Ali Nazemi, 16 Sep 2025
We sincerely appreciate the constructive review provided by Dr. Steven Fassnacht. We also apologize for the delay in our response. We intentionally waited until the formal review process and public discussion period had concluded to ensure that we could carefully consider all feedback received, design a coherent revision strategy, and provide an integrated response that addresses both the reviewers’ comments and the broader community input. Please find our detailed, point-by-point responses to your comments below.
Note: Review comments are numbered and responded to in the order received.
1. This paper assesses the snow information from the ERA5-Land dataset at a 9 km, hourly resolution, in comparison to Moderate Resolution Imaging Spectroradiometer (MODIS) SC data at a 5 km, monthly resolution and the Canadian Meteorological Centre’s (CMC) SD/SWE product at a 24 km, daily/monthly resolution. The ERA5 dataset is being used more and more, and this is an important assessment. There is a mismatch of spatial and temporal resolutions between the ERA5L and evaluation datasets, and the authors use data harmonization. The final assessment is 1 March 2000 to 31 December 2020 at 25 km on an average monthly time step for October to June (monthly, seasonal, annual). Twenty-one ecological regions across Canada and Alaska were considered.
Response: Thanks a lot for providing this nice summary of our study.
2. Overall, this is a lot of work and is the basis for a reasonable paper. However, the paper needs reorganization (e.g., Methods moved from Results and Discussion, improved figures, better explanation on how to interpret figures). There are issues with the “truth” data that are used. Those are not discussed. The Discussion is lacking and needs to circle back to the Introduction.
Response: Many thanks for your positive view on our paper. We plan to make a major reorganization/rewriting through our revisions and improve the quality of figures and their corresponding explanations/interpretations. We are also planning to use other sources of data to address the issue of “partial truth” and the fact that no single dataset can be considered the “absolute truth” for benchmarking the snow fields of ERA5-Land (hereafter ERA5L). We will also work on a new section in our Discussion to compare our work with existing benchmarking efforts in the literature and will link that to the research questions raised in the Introduction.
3. While it is understandable that the coarsest spatial and temporal resolutions are used to have consistency, the coarsens the datasets being assessed (ERA5L) by 25 times spatially and 720 times temporally. This removes some of the nuance of the finer resolution ERA5L data. The snowpack does not vary drastically on a daily basis, so coarsening to that resolution is acceptable. However, using a monthly time step and averaging the ERA5L data is a problem. The implications need to be discussed further.
Response: Many thanks for this insightful comment. Spatially, ERA5L is coarsened ~7.7 times (from 9×9 km2 to 25×25 km2), which indeed entails losing some of the spatial variability within ERA5L. Having said that, this is rather inevitable if we plan to make a consistent benchmarking for all snow variables, given the fact that not only CMC, but also other gridded SWE data, are rarely available at a finer spatial scale than 25×25 km2. Nevertheless, we plan to also benchmark the ERA5L snow fields (except snow cover fraction) at the daily scale using ground observations, which will address your legitimate concern.
4. While it is acceptable to use the standard deviation (Std Dev – I don’t have sigma handy on my keyboard) to assess interannual variability, this is biased when a trend is present. Specifically, SD could be large because there is a lot of interannual variability, or because there is a large trend. Consider detrending the time series to better assess the interannual variability. If you don’t address this, at least discuss the implications.
Response: Thanks a lot for pointing out this very important technical issue. This is absolutely true and we will consider detrending the data in our revised paper before estimating StdDev as a measure of interannual variability.
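The separation of trend and interannual variability can be sketched as follows. This is a minimal illustration with a synthetic annual series (the numbers are invented, not our actual SWE data): a Theil–Sen slope is fitted and removed before the standard deviation is computed, so the trend no longer inflates the variability estimate.

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(42)
years = np.arange(2000, 2021)
# synthetic annual SWE series (mm): a declining trend plus interannual noise
swe = 150.0 - 2.0 * (years - years[0]) + rng.normal(0.0, 10.0, years.size)

# raw standard deviation mixes the trend signal with interannual variability
raw_std = np.std(swe, ddof=1)

# fit a robust Theil-Sen trend, remove it, then estimate variability on residuals
slope, intercept, _, _ = theilslopes(swe, years)
residuals = swe - (intercept + slope * years)
detrended_std = np.std(residuals, ddof=1)

print(raw_std, detrended_std)  # detrended std is smaller when a trend is present
```

The same residual-based standard deviation would be computed per grid cell (or per ecozone) before comparing interannual variability between products.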
5. The figures are somewhat understandable but need work. I cannot distinguish some of the colors from one another, i.e., there are sets of 3 regions with almost the same color as overall there are only about 6 colors. Perhaps use a sub-set of the 21 regions Further, the rainbow color ramp is difficult for some people who cannot with visual impairment. The individual panes in each figure tend to be small, especially when 12 panes are present (Figures 2 to 5). Make each pane large, homogenize the x-axes range, and remove some of the white space and repeated axes labels. Probably add a letter to each figure pane.
Response: We are sorry that our initial color coding made it difficult to properly go through and interpret our results. We will change the color coding so that the ecozones can be better distinguished. We will also recreate the figures to improve readability. Your suggestion of using a sub-set of the 21 regions is very good. In our revisions, and depending on the figure and the message we would like to convey, we will either use a subset of ecozones, regroup them into broader land classifications, or change the type of visualization to avoid the problems noted.
6. At the beginning of each section in the Results, some methods are introduced in the first paragraph. These should be moved to the Methods section. There is some explanation on how to read the figures, but this seems incomplete. For example, when I look at Figure 2, is there a good shape, i.e., ERA5L is close to the “truth?” I assume that a perfect correlation between ERA5L and “truth” is a vertical line along RD* = 0. It is difficult to interpret what the different curves mean. Referring to Figure 3, the authors use CV (SD / mean) > 1 (vs. < 1), but this cannot be distinguished in Figure 3.
Response: Many thanks for your comment. We will move all the methodological explanations from Results and include them in the Methods sections. We will also improve the explanation related to each figure to improve the readability.
Regarding the example you mentioned from Figure 2, you are absolutely right: the perfect match would be a vertical line along RD* = 0. The different curves are basically the ECDFs for different ecozones, but we will regroup them so that readability is improved. We will also change the axes in Figure 3 so that the CV can be distinguished.
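The reading of these ECDF curves can be illustrated with a minimal synthetic sketch (invented numbers, not the actual ERA5L/CMC fields): relative differences (RD) between an estimate and a reference are computed per grid cell, and their ECDF is examined. A perfect product would collapse the curve to a vertical line at RD = 0; a systematic bias shifts the whole curve.

```python
import numpy as np

def ecdf(values):
    """Return sorted values and their empirical CDF probabilities."""
    x = np.sort(np.asarray(values))
    p = np.arange(1, x.size + 1) / x.size
    return x, p

rng = np.random.default_rng(0)
truth = rng.uniform(50.0, 200.0, 500)            # hypothetical reference SWE per cell (mm)
estimate = truth * 1.2 + rng.normal(0.0, 5.0, 500)  # "reanalysis" with a 20% positive bias

rd = (estimate - truth) / truth                  # relative difference per grid cell
x, p = ecdf(rd)

# the curve is centred near RD = +0.2 rather than 0, revealing the overestimation
median_rd = x[np.searchsorted(p, 0.5)]
print(median_rd)
```

In the figures, one such curve is drawn per ecozone, which is why a horizontal guide line at ECDF = 0.5 (as the reviewer suggests) makes the median bias of each region easy to read off.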
7. Figures 6 and 7 seem to be meant to explain the previous results. However, they would be better placed in the actual Results section. Similar to above, there are methods presented at the beginning of several of the Discussion sub-sections.
Response: That is a fair point. We will include the snow density in our actual benchmarking and not as a point of discussion. We will also ensure that all the methodological details are moved and explained in the Method section.
8. The Discussion does not explain the work outside of the work itself. There are no citations and the authors do not use the literature to explain their findings.
Response: Many thanks for your comment. While we have pointed to some of the previous literature, we understand that these references may be scattered in the text. Through our revision, we will gather and extend them in a standalone part of the Discussion section and position our findings relative to those noted in the literature.
9. As I state above, there are spatial and temporal issues and potential problems with the “truth” datasets. For example, SD/SWE are derived using the Brown et al. (2003) model, but that model has some large assumptions, such as the rain-snow threshold of +2C (this varies with climate and other factors), and the new snow density (the Hedstrom-Pomeroy equation is wrong based on the data that it is fit to). This is not stated in Supplement S2, so it is unclear whether it is used in the paper: Brown et al. (2003) do not account for precipitation undercatch and apply a 20% reduction in precipitation to account for sublimation and blowing snow. This is very important across much of the study domain (less so regions 12, 13, 15, 17, 20? – I can’t tell exactly from the colors). The actual difference between precipitation, undercatch, sublimation and blowing snow varies spatially and temporally.
Response: We appreciate your thoughtful comment. We do acknowledge the limitations of the CMC SD/SWE product, and therefore in our revisions we treat it not as an “absolute truth” but rather as a “partial truth”. For SD, we consider CMC in conjunction with ground-based data from NorSWE (Mortimer, C., & Vionnet, V. (2025). Northern Hemisphere in situ snow water equivalent dataset (NorSWE, 1979–2021). Earth System Science Data, 17(7), 3619-3640). For SWE, we consider CMC’s SWE, NorSWE, and two other products, i.e., satellite-based AMSR as well as blended gridded data from Environment and Climate Change Canada (https://climate-scenarios.canada.ca/?page=blended-snow-data).
Regarding the specific points you mentioned with respect to the limitations of the CMC SD/SWE product: those limitations are all valid; however, as modelled SD values are assimilated with ground-based data, we expect that some of these limitations are mitigated. To gauge the value of assimilation, in our revisions we will consider cross-comparison with ground-based SD/SWE data.
Regarding snow density, please note that the CMC model provides a “first-guess” SD, which is then assimilated using ground-based data. The assimilated SD is then multiplied by static climatological monthly snow density values based on Sturm et al. (1995; Sturm, M., J. Holmgren, and G. E. Liston. 1995. A Seasonal Snow Cover Classification System for Local to Global Applications. Journal of Climate 8: 1261–1283) to provide monthly SWE. This was indeed missed in our current explanation in Supplement S2 and we will provide it in our revised paper.
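The lookup-based conversion described above can be sketched in a few lines. The monthly densities below are illustrative placeholders for one hypothetical snow class, not the values CMC actually uses from the Sturm et al. (1995) classification:

```python
# hypothetical climatological monthly snow densities (kg m^-3) for one snow
# class; illustrative only, not the actual Sturm et al. (1995) lookup values
monthly_density = {"Nov": 180.0, "Dec": 210.0, "Jan": 240.0,
                   "Feb": 260.0, "Mar": 280.0, "Apr": 310.0}

RHO_WATER = 1000.0  # density of water, kg m^-3

def swe_from_depth(depth_m, month):
    """Convert an (assimilated) snow depth in metres to SWE in mm,
    using the static climatological density for the given month."""
    rho = monthly_density[month]
    return depth_m * rho / RHO_WATER * 1000.0  # mm of water equivalent

swe_jan = swe_from_depth(0.5, "Jan")
print(swe_jan)  # 0.5 m of snow at 240 kg m^-3 -> 120.0 mm SWE
```

Because the density table is static, all interannual variability in such a SWE product comes from the assimilated SD, which is one reason we treat CMC SWE as a means of gauging SD assimilation rather than as an independent truth.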
10. Line 33: While there is no complete agreement on snow variable names, we typically call snow cover “Snow Covered Area”, with the short form SCA. SD is acceptable for snow depth, but d_subscript_s is often used.
Response: Many thanks for your comment. For snow cover, we actually used Snow Cover Fraction (SCF), which is the fraction of a grid cell covered by snow. In our revision, we will use SCF throughout the paper. We will keep SD and SWE for snow depth and snow water equivalent. For snow density, we will use “Rho”.
11. Line 39: SWE and the others are “variables,” not “parameters,” as they change over both space and time. In the next sentence you use “variables.” I recommend using the term “variables” here.
Response: This is absolutely right, and we will ensure that SWE, SD, SCF and Rho are called “variables” throughout our revision.
12. Lines 42-55: This paragraph is relevant, but it seems somewhat superfluous, as it gives information that is already known to the cryospheric community. It is short but is somewhat of a “throw away” paragraph – consider being more succinct.
Response: We will substantially revise this paragraph and others, given that our revised benchmarking approach will implement cross-comparison with other data sources.
13. Line 43: I question if in-situ measurements are the most “accurate?” There are various papers that talk about the point to area problem with field measurements.
Response: That is an excellent and absolutely valid point. This is why we believe we are dealing with “partial truths” rather than “the absolute truth” when benchmarking snow variables, and why we plan to use an ensemble of datasets in our revised paper for benchmarking ERA5L’s snow fields.
14. Lines 109-110 and Supplement S1: this is a good addition to explain how the Hydrology-Tiled ECMWF Scheme for Surface Exchanges over Land model works to represent the snow variables in ERA5L. Thank you!
Response: Many thanks for your comment. We are pleased that you find our explanation useful and informative.
15. Lines 119-120 and Supplement S2: this is a good addition to explain the “Brown 2003” model.
Response: Many thanks for your comment. We are pleased that you find our explanation useful and informative. We will enhance our explanation in Supplement S2 to ensure that all aspects of how the assimilated SD is converted to SWE using climatological snow density are communicated.
16. Lines 130-131 and Supplement S3: this is a good addition to explain the monthly MODIS time series data.
Response: We are pleased that you find our explanation useful and informative.
17. Line 135: here you use the word “altitude,” but in the previous part of the sentence you say “elevation.” Elevation is the correct term. Also, be consistent.
Response: We will use “elevation” throughout the revised paper.
18. Lines 140, 141, etc.: Data is a plural word, so it should read “The data “are” publicly …”
Response: We apologize for this oversight. We will correct this in our revised paper.
19. Lines 174-179: put these 21 ecological regions in a table. Also give us the area of each. The general location could be helpful, as several are blended together in Figure 1a (and are thus indistinguishable).
Response: Many thanks for your comment. We will do so throughout our revisions.
20. Figure 1 can be improved. Figures 1c and 1d should have the same sized y-axis as they both go from 0 to 100%.
Response: Many thanks for your comment. Both axes do extend to 100%; however, because the subfigures differ in size, this is difficult to see. We will correct this in our revised paper.
21. Line 196: It is the Theil-Sen slope, not just Sen. Also, the two (self) citations provided do not speak specifically to the Mann-Kendall test or Theil-Sen slope – use appropriate citations here.
Response: Thanks a lot for your comment. It is true that Theil introduced the idea in the 1950s, and the full name is the Theil-Sen slope estimator. We will use the full name at the first instance and then “Sen’s slope” for brevity. We will also cite the original references. We cited those two references from our previous work because the computational codes used in the current study were developed there.
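For context, the Theil-Sen estimator is simply the median of the slopes between all pairs of points, which is what makes it robust to outliers. A minimal sketch (with hypothetical SWE values, not the paper's actual data or code):

```python
import itertools
import statistics

def theil_sen_slope(t, x):
    """Theil-Sen estimator: the median of slopes over all point pairs."""
    slopes = [(x[j] - x[i]) / (t[j] - t[i])
              for i, j in itertools.combinations(range(len(t)), 2)]
    return statistics.median(slopes)

# Hypothetical annual SWE series (mm), for illustration only
years = list(range(2000, 2010))
swe = [150, 148, 149, 145, 146, 142, 143, 140, 139, 137]
print(theil_sen_slope(years, swe))  # -1.5 (mm per year)
```

Because the median, rather than a least-squares fit, summarizes the pairwise slopes, a single anomalous year cannot drag the trend estimate, which is why such non-parametric estimators suit noisy environmental series.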
22. While equations 1 and 2 are very simple, they are acceptable since X is a statistic and not a variable.
Response: Many thanks for your comment. In our revised paper, we will also use other criteria, e.g., six Goodness-of-Fit measures, and we will ensure that those new criteria are communicated effectively.
23. Line 211: do you “consider” or “use” the scaled version. Here and in other locations in the text, the language is tenuous.
Response: The right word is “use”. We will clean up the language during the revision.
24. Line 222: good idea to consider “brevity,” but are the result presented at the monthly scale? Figures 2, 3, 5, 6, 7 present annual results.
Response: All monthly and seasonal data are provided in the Supplement (Figures S1 to S6). In our revised paper, we will ensure that the link between the main text and the Supplement is clear throughout.
25. Lines 224-235: This paragraph is mostly methods and should be moved to that section. More explanation on how to read Figure 2 would be useful – what shape of lines is good, i.e., ERA5L is close to the “truth?”
Response: We will ensure that all methodological details are moved to the Methods section. As you mention, a vertical line at RD* = 0 and/or D = 0 indicates a perfect fit. We will ensure that such explanations are given for every figure in the revised paper.
26. Figure 2: this figure is difficult to understand, partly because the individual panels are small. I assume that the right figure is D and the left 3 are RD*? Consider using the same x-axis scales for all 12 panels so that the reader can visually compare the results (at least the same for all RD and for all D). As per my comment about the colors in Figure 1, I cannot tell regions apart. Consider how you can improve upon this – perhaps 10 representative figures. Since ECDF = 0.5 and RD* = 0 are the centre, perhaps a horizontal dotted line across ECDF = 0.5 would help.
Response: Many thanks for your comment. We will improve this figure in our revisions.
27. Figure 3: The caption should read second moment (Std Dev) “versus” the first moment
Response: Many thanks for your comment. We will apply this change in the caption during our revisions.
28. Figure 4: apparently the spatial structure can be some other than the 4 states listed? For example, SC for region 1 is along the dashed line for all four statistics. Does that mean that they are the same, i.e., complete agreement?
Response: No, this means that the data are missing and a comparison cannot be made. We will change the visualization method for this figure, replacing the stair graphs with heatmaps and leaving regions with missing data blank, so that the readability of the results is improved.
29. I stopped examining specifics at this point, as the general reworking of the paper is necessary before the details can be examined.
Response: Many thanks for taking the time to review our paper. We have benefited tremendously from your comments, which will guide us in preparing a much stronger revised manuscript.
Citation: https://doi.org/10.5194/egusphere-2024-4150-AC2
RC2: 'Comment on egusphere-2024-4150', Anonymous Referee #2, 25 Jul 2025
This manuscript tackles an important topic: the benchmarking of ERA5-Land snow cover (SC), snow depth (SD), and snow water equivalent (SWE) across northern North America. The study is technically sound in its execution and the dataset analyzed is valuable, but the manuscript in its current form requires substantial revisions to meet publication standards.
While the topic is timely and relevant, the paper does not sufficiently highlight its novelty or unique contributions compared to previous benchmarking studies. The Discussion section, in particular, is weak: it lacks citations and broader context from other similar studies, and the conclusions mostly restate well-known limitations of reanalysis products without offering new insights or practical guidance for end-users. Overall, the manuscript is overly descriptive and at times repetitive, without providing deeper interpretation or actionable recommendations that could advance the field of snow process modelling.
A key shortcoming is the lack of critical evaluation of uncertainties, such as input biases in the Canadian Meteorological Centre (CMC) data or cloud-masking issues in MODIS. The discussion also fails to situate the results within the broader literature, which is necessary for assessing the significance of the findings. There is also little justification for certain methodological choices (e.g., grid harmonization, use of specific non-parametric tests), and the paper would benefit from synthesizing the main findings in a more structured way.
I recommend major revisions before this manuscript can be considered for acceptance. The paper needs improved readability, a stronger discussion, and clearer presentation of the key results and their practical implications. Below are detailed, line-specific comments.
Please find below some specific comments
Line 8: The opening sentence is too broad. Suggest revising to something more direct:
“We benchmark ERA5-Land’s snow cover (SC), snow depth (SD), and snow water equivalent (SWE) across northern North America.”
Line 9: Change “limitation” to “limitations”.
Line 11: The phrase “regions that are characterized by prolonged seasonal snow cover” is redundant given the focus on Canada and Alaska.
Line 19: Avoid prescriptive language such as “we advise against direct use.” Instead, write:
“ERA5-Land’s SD and SWE require bias correction before being applied directly in hydrological or ecological modeling.”
Lines 23-30: The introduction is a good start, but it should emphasize what sets this study apart from other benchmarking efforts. Clearly state the unique contribution or new perspective.
Line 38: The paragraph about snowpack as a reservoir is informative but verbose. Consider splitting it into two or more concise sentences.
Line 57: The sentence defining reanalysis and describing its uses is too long. Break it into two parts, one for the definition and one for the applications.
Line 62: Add a bridging sentence explaining how reanalysis datasets fill observational gaps left by sparse in-situ networks and cloud-contaminated satellite retrievals.
Line 65: Rephrase “scale mismatch is often overlooked” to something softer like:
“The issue of scale mismatch has not been fully addressed in prior studies.”
Provide one or two references to back this statement.
Line 71: The discussion on downscaling vs. upscaling is solid. It would benefit from citing specific studies that examine the uncertainties introduced by these approaches.
Line 85: The Canadian Rockies example could be more quantitative. For instance, provide typical snow depth values, e.g., “depths often exceeding 100 cm by mid-winter”.
Line 105: Ensure that “ERA5-Land” or its abbreviation “ERA5L” is used consistently after its first introduction.
Line 149: Briefly explain why 25 km was chosen as the grid resolution for harmonizing datasets.
Line 196: While Mann-Kendall and Sen’s slope tests are valid, add a quick note on why non-parametric approaches are particularly suitable for snow datasets
Line 210: Provide a short explanation of why a logarithmic RD transformation was used, especially for readers less familiar with this technique.
Line 215: The introduction of ECDF analysis is strong but consider adding a simple example of how bias shows up in ECDF curves
Line 223: When discussing grid-scale discrepancies, include numerical ranges of bias e.g., mean over/underestimation values for SC, SD, and SWE.
Also consider adding a summary table highlighting key discrepancy metrics across ecological regions.
Line 243: Check Figure 2’s ECDF axes and colour schemes, ensure they remain clear when printed or scaled down.
Line 246: Add more context when describing “extremely large overestimations.” Are these localized such as in specific ecozones or widespread?
Line 273: When referencing seasonal results, make explicit connections to Supplementary Figures S1–S4 to help guide readers.
Line 276: The seasonal discrepancies discussion is repetitive. Consider merging SC, SD, and SWE discussions into a single, comparative paragraph when trends are similar.
Line 307: Add a line explaining why ERA5-Land might underestimate trends, mention limitations in the model physics or the quality of the forcing data.
Line 370: Provide a brief explanation of why spatial structure weakens at seasonal versus annual scales.
Line 377: Summarize the north-south gradient in snow density discrepancies clearly. Introduce a short subsection in the Discussion specifically addressing uncertainty sources, such as MODIS cloud-masking errors, CMC interpolation, and ERA5-Land physics.
Line 430: Discuss how the spatial patterns of discrepancies might inform bias-correction strategies or regional model tuning.
Line 435: The snow density analysis is insightful but would be stronger with quantitative differences like ERA5L overestimates mean snow density by 15–20 kg/m³ compared to CMC.
Line 442: Clarify whether snow density estimates in CMC and ERA5-Land are based on station observations or modeled parameters.
Line 524: Avoid prescriptive statements like “we advise against…”. Instead, rephrase as:
“ERA5L SD and SWE show systematic biases that limit their direct applicability without bias correction.”
Line 570: Search for and cite recent studies that validate ERA5-Land or other snow reanalysis products, to strengthen the Discussion.
Line 580: Rewrite the closing sentence with a practical takeaway for snow modellers or water resource managers.
Line 600: Check references for consistent formatting including italicizing journal names, adding DOIs
Line 680: Ensure all cited references are included in the reference list and properly formatted.
Citation: https://doi.org/10.5194/egusphere-2024-4150-RC2
AC3: 'Reply on RC2, the anonymous reviewer', Ali Nazemi, 16 Sep 2025
We sincerely appreciate the constructive review provided by the anonymous reviewer. We also apologize for the delay in our response. We intentionally waited until the formal review process and public discussion period had concluded to ensure that we could carefully consider all feedback received, design a coherent revision strategy, and provide an integrated response that addresses both the reviewers’ comments and the broader community input. Please find our detailed, point-by-point responses to your comments below.
Note: Review comments are numbered and responded to in the order received.
1. This manuscript tackles an important topic: the benchmarking of ERA5-Land snow cover (SC), snow depth (SD), and snow water equivalent (SWE) across northern North America. The study is technically sound in its execution and the dataset analyzed is valuable, but the manuscript in its current form requires substantial revisions to meet publication standards.
Response: Many thanks for your positive evaluation and the concise summary of our work.
2. While the topic is timely and relevant, the paper does not sufficiently highlight its novelty or unique contributions compared to previous benchmarking studies. The Discussion section, in particular, is weak: it lacks citations and broader context from other similar studies, and the conclusions mostly restate well-known limitations of reanalysis products without offering new insights or practical guidance for end-users. Overall, the manuscript is overly descriptive and at times repetitive, without providing deeper interpretation or actionable recommendations that could advance the field of snow process modelling.
Response: Many thanks for this critical comment. Regarding the novelty and unique contribution of our work, to the best of our knowledge, our work marks the first benchmarking study of snow variables that considers the reproducibility of spatial patterns and searches for spatial patterns in the discrepancies. Having said that, we do agree that this may not be highlighted enough in the article. We will ensure that these two key points are properly highlighted. We also plan to add other novel perspectives in our revision, i.e., cross-comparison with multiple “partial truths” rather than a single “absolute truth” as well as the issue of “deep uncertainty”. By deep uncertainty, we refer to situations in which no truth exists to provide a means for an objective comparison of the products with one another.
With respect to the Discussion, your point is fair. While the current version of our paper contains bits and pieces that refer to previous benchmarking studies, we do agree that these are rather scattered. We will include a separate sub-section in the Discussion to gather these and also extend the discussion so that our study is well positioned within the existing literature. We will also ensure that the revised paper is easy to follow, is not overly descriptive, and does not include any repetition.
3. A key shortcoming is the lack of critical evaluation of uncertainties, such as input biases in the Canadian Meteorological Centre (CMC) data or cloud-masking issues in MODIS. The discussion also fails to situate the results within the broader literature, which is necessary for assessing the significance of the findings. There is also little justification for certain methodological choices (e.g., grid harmonization, use of specific non-parametric tests), and the paper would benefit from synthesizing the main findings in a more structured way.
Response: Many thanks for this critical comment. We do agree that the evaluation of existing uncertainties in our benchmarking data, i.e., the CMC and MODIS products, was not sufficiently done in our initial submission. In order to address this properly, we will include new datasets, so that each snow variable from ERA5-Land is cross-compared with at least two different “reference” datasets. Specifically, to quantify the noted uncertainties in the CMC product, we will include ground-based observations so that we have an objective measure of the error and uncertainties in the CMC data itself. Regarding the snow cover fraction (hereafter SCF), we will use a blended dataset from Environment and Climate Change Canada (https://climate-scenarios.canada.ca/?page=blended-snow-data) so that we can address uncertainties in the MODIS dataset.
As we mentioned in our response to your 2nd comment above, we plan to have a standalone sub-section in the Discussion to fully position our study with respect to existing benchmarking works in the literature and to assess the significance and practical use of our findings. In this sub-section, we will also include a structured summary of our key findings. With respect to methodological choices, we will include brief justifications to ensure that the key reasons behind their implementation are clear.
4. I recommend major revisions before this manuscript can be considered for acceptance. The paper needs improved readability, a stronger discussion, and clearer presentation of the key results and their practical implications. Below are detailed, line-specific comments.
Response: Many thanks for your comment. We understand that addressing all comments received from you and the other reviewers, as well as the further additions we plan to include to make our work a solid benchmarking study, necessitates a major revision. We are committed to producing a revised article that is much improved compared to our initial submission.
5. Line 8: The opening sentence is too broad. Suggest revising to something more direct:
“We benchmark ERA5-Land’s snow cover (SC), snow depth (SD), and snow water equivalent (SWE) across northern North America.”
Response: Many thanks for your suggestion. We will certainly consider this.
6. Line 9: Change “limitation” to “limitations”.
Response: Our apologies for this oversight. This will be corrected.
7. Line 11: The phrase “regions that are characterized by prolonged seasonal snow cover” is redundant given the focus on Canada and Alaska.
Response: We used this phrase to justify why Canada and Alaska were chosen for this benchmarking study, but we understand that it is redundant in the Abstract. It will be removed.
8. Line 19: Avoid prescriptive language such as “we advise against direct use.” Instead, write:
“ERA5-Land’s SD and SWE require bias correction before being applied directly in hydrological or ecological modeling.”
Response: Many thanks for your suggestion. We will certainly avoid prescriptive language in our revised paper.
9. Lines 23-30: The introduction is a good start, but it should emphasize what sets this study apart from other benchmarking efforts. Clearly state the unique contribution or new perspective.
Response: Many thanks for the comment. We wanted to set the scene before diving into the specificity of our work. In our revisions, we will shorten this part to get quicker to the unique contributions of our work.
10. Line 38: The paragraph about snowpack as a reservoir is informative but verbose. Consider splitting it into two or more concise sentences.
Response: Many thanks for the comment. We will merge this paragraph with the preceding one and carve the sentences to make it more concise and improve readability.
11. Line 57: The sentence defining reanalysis and describing its uses is too long. Break it into two parts, one for the definition and one for the applications.
Response: Many thanks for your comment. We will do so.
12. Line 62: Add a bridging sentence explaining how reanalysis datasets fill observational gaps left by sparse in-situ networks and cloud-contaminated satellite retrievals.
Response: Thanks for this suggestion. We will add a bridging sentence to mention these points.
13. Line 65: Rephrase “scale mismatch is often overlooked” to something softer like:
“The issue of scale mismatch has not been fully addressed in prior studies.”
Provide one or two references to back this statement.
Response: Many thanks for your suggestion. We will soften the sentence. However, please note that in our revised paper the point about scale mismatch will be made in a different way, and this part of the Introduction will be substantially reworked and rewritten.
14. Line 71: The discussion on downscaling vs. upscaling is solid. It would benefit from citing specific studies that examine the uncertainties introduced by these approaches.
Response: Sure, we will add a few references to elaborate on this.
15. Line 85: The Canadian Rockies example could be more quantitative. For instance, provide typical snow depth values e.g. “depths often exceeding 100 cm by mid-winter
Response: Many thanks for your comment. We will report some relevant statistics to give the sentence a more quantitative angle.
16. Line 105: Ensure that “ERA5-Land” or its abbreviation “ERA5L” is used consistently after its first introduction.
Response: Many thanks for this hint. We will ensure this in the revised document.
17. Line 149: Briefly explain why 25 km was chosen as the grid resolution for harmonizing datasets.
Response: We wanted all our data, including the reference CMC product, to go through the same level of processing so that we can make a fair comparison. The 25×25 km2 grid represents the least upscaling we could apply to the 24×24 km2 CMC data.
18. Line 196: While Mann-Kendall and Sen’s slope tests are valid, add a quick note on why non-parametric approaches are particularly suitable for snow datasets
Response: Non-parametric tests do not make any assumption about the distribution of the data (or their residuals) and therefore are more suitable for environmental data, including snow variables. We will communicate this in our revised paper.
19. Line 210: Provide a short explanation of why a logarithmic RD transformation was used, especially for readers less familiar with this technique.
Response: This transform damps very large discrepancies onto a scale that allows small and large discrepancies to be observed together. We will add this explanation in our revisions.
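As one illustration of why such a transform helps (the manuscript's exact RD* definition is not given here; the signed-log form below is an assumption used purely for illustration):

```python
import math

def scaled_rd(model, reference):
    """Signed-log scaling of the relative discrepancy.

    Illustrative form only; the paper's actual RD* definition may differ.
    """
    rd = (model - reference) / reference          # raw relative discrepancy
    return math.copysign(math.log10(1.0 + abs(rd)), rd)

# A tenfold overestimate (raw RD = 9) maps to 1.0, while a 10%
# overestimate maps to ~0.04, so both remain visible on one axis.
print(scaled_rd(1000.0, 100.0))  # 1.0
print(scaled_rd(110.0, 100.0))   # ~0.0414
```

The sign is preserved so that over- and underestimation remain distinguishable, while the logarithm compresses the extreme overestimations that would otherwise dominate the axis.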
20. Line 215: The introduction of ECDF analysis is strong but consider adding a simple example of how bias shows up in ECDF curves
Response: Many thanks for your suggestion. We will certainly do so.
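As a possible toy example of how bias appears in an ECDF (hypothetical numbers, not from the study): with a systematic positive bias, the ECDF of the discrepancies sits entirely to the right of zero, and the value at ECDF = 0.5 reads off the median bias.

```python
import numpy as np

def ecdf(values):
    """Empirical CDF: sorted values and their cumulative probabilities."""
    x = np.sort(np.asarray(values))
    p = np.arange(1, x.size + 1) / x.size
    return x, p

rng = np.random.default_rng(42)
reference = rng.normal(50.0, 10.0, 1000)               # hypothetical "truth" SD, cm
model = reference + 15.0 + rng.normal(0.0, 5.0, 1000)  # +15 cm systematic bias

x, p = ecdf(model - reference)                         # ECDF of discrepancies
median_d = x[np.searchsorted(p, 0.5)]                  # discrepancy at ECDF = 0.5
print(round(median_d, 1))                              # close to the +15 bias
```

In an unbiased comparison the curve would straddle zero with its median near ECDF = 0.5; a horizontal shift of the whole curve is the visual signature of systematic over- or underestimation.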
21. Line 223: When discussing grid-scale discrepancies, include numerical ranges of bias e.g., mean over/underestimation values for SC, SD, and SWE. Also consider adding a summary table highlighting key discrepancy metrics across ecological regions.
Response: Adding a summary table is a great idea. We will do so in our revisions.
22. Line 243: Check Figure 2’s ECDF axes and colour schemes, ensure they remain clear when printed or scaled down.
Response: We will ensure this during our revisions.
23. Line 246: Add more context when describing “extremely large overestimations.” Are these localized such as in specific ecozones or widespread?
Response: We mentioned in the same line that such large overestimations, e.g., in the case of long-term mean, can be widespread and include 20 and 18 ecozones for SD and SWE, respectively. We will improve the wording so that this can be better highlighted.
24. Line 273: When referencing seasonal results, make explicit connections to Supplementary Figures S1–S4 to help guide readers.
Response: Many thanks for the suggestion. We will do so in the revised paper and whenever we refer to seasonal and monthly results that are provided in the Supplement.
25. Line 276: The seasonal discrepancies discussion is repetitive. Consider merging SC, SD, and SWE discussions into a single, comparative paragraph when trends are similar.
Response: Thanks a lot for your suggestion. This is a great idea and we will implement it in our revisions.
26. Line 307: Add a line explaining why ERA5-Land might underestimate trends, mention limitations in the model physics or the quality of the forcing data.
Response: Sure, we will do so. However, we believe this is more of a discussion point than a result; we might mention it briefly in the Results and then elaborate on it in the Discussion.
27. Line 370: Provide a brief explanation of why spatial structure weakens at seasonal versus annual scales.
Response: This is an important point and thanks a lot for the hint. We will discuss this in our revisions.
28. Line 377: Summarize the north-south gradient in snow density discrepancies clearly. Introduce a short subsection in the Discussion specifically addressing uncertainty sources, such as MODIS cloud-masking errors, CMC interpolation, and ERA5-Land physics.
Response: Many thanks for this suggestion. We will better summarize and explain the north-south gradient in the discrepancies of snow density. We will also add a subsection in the Discussion on the uncertainty sources. Having said that, please note that addressing uncertainties through cross-comparison with multiple “reference” datasets will be the backbone of our revisions and a key point of discussion in the paper.
29. Line 430: Discuss how the spatial patterns of discrepancies might inform bias-correction strategies or regional model tuning.
Response: Excellent and relevant point! We will certainly mention this in the revised paper. Many thanks for this thoughtful comment.
30. Line 435: The snow density analysis is insightful but would be stronger with quantitative differences like ERA5L overestimates mean snow density by 15–20 kg/m³ compared to CMC.
Response: Sure. We will report relevant quantities in our revised manuscript.
31. Line 442: Clarify whether snow density estimates in CMC and ERA5-Land are based on station observations or modeled parameters.
Response: We have mentioned this in Sections S1 and S2 of the Supplement. However, we will briefly mention it here as well to ensure the point is not missed. Having said that, please note that we plan to bring the snow density analysis into the Results section, so this section will change.
32. Line 524: Avoid prescriptive statements like “we advise against…..” Instead, rephrase as:
“ERA5L SD and SWE show systematic biases that limit their direct applicability without bias correction.”
Response: Many thanks for the suggested wording. We will ensure that no prescriptive statement will be in the revised document.
33. Line 570: Search for and cite recent studies that validate ERA5-Land or other snow reanalysis products, to strengthen the Discussion.
Response: We will certainly do so in the revised paper.
34. Line 580: Rewrite the closing sentence with a practical takeaway for snow modellers or water resource managers.
Response: We will certainly do so in the revised paper.
35. Line 600: Check references for consistent formatting including italicizing journal names, adding DOIs
Response: We believe that journal names should be abbreviated but not italicized. We will double-check and make sure that we use the correct formatting.
36. Line 680: Ensure all cited references are included in the reference list and properly formatted.
Response: We will ensure this for current references and the new ones that we plan to add in our revisions.
Citation: https://doi.org/10.5194/egusphere-2024-4150-AC3
This paper seeks to understand the performance of snow estimates from ERA5-Land over Canada. Canada-specific analysis of the performance of snow products is lacking in the broad scientific literature; this work could help fill this knowledge gap.
The authors evaluate 3 snow-related variables from the ERA5-Land reanalysis: snow water equivalent (SWE), snow depth (SD), and snow cover fraction (SCF). For each of these three variables, they use a single reference dataset intended to represent "truth". For SCF, the uncertainty in the MODIS dataset may be low enough that a single reference product is sufficient for the evaluation. However, for snow depth and SWE, there is much more uncertainty in historical estimates. At present some of this uncertainty is irreducible, and so previous work has demonstrated the value in using an ensemble of datasets for evaluating SWE (Mudryk et al. 2015; 2025; Mortimer et al. 2020). Ensembles are helpful in providing a range of reasonable values against which outliers can be screened (especially for climatological snow mass and trends (Mudryk et al 2015, 2025)).
The authors’ decision to rely on CMC as the only reference dataset for SD and SWE is further complicated because ERA5-Land is optimized for SWE whereas CMC is optimized for SD. Discrepancies between ERA5-Land and CMC may stem either from errors in the SWE values or in the parameterizations, and the analysis presented does not distinguish which source of error contributes to the discrepancy. Although the CMC product provides monthly SWE, it is not really a SWE product. Instead, climatological snow density values from a lookup table, which do not evolve over the time series, are used to go from SD to SWE. Therefore, the CMC SWE product should not be considered a reference 'truth'. On the other hand, SWE is the prognostic variable directly simulated in ERA5-Land, while SD is estimated using snow density parameterizations, so discrepancies between it and CMC are expected. While the CMC product does assimilate ground observations, these are not available over the entire country, and therefore the SD values in the CMC product represent a mixture of information from both ground observations and the snow model. This means that away from locations with assimilated data, the assessed differences between CMC and ERA5-Land will simply represent differences in the snow models used to produce each product.
We encourage the authors to identify a more appropriate set of SWE products for their evaluation and to discuss the limitations of ERA5-Land's SD estimates. Additional data could include other reanalysis datasets and/or in situ data (e.g. NorSWE (Mortimer and Vionnet, 2025; https://zenodo.org/records/15263370) for SWE; the global SYNOP network or GHCN-D [https://www.ncei.noaa.gov/products/land-based-station/global-historical-climatology-network-daily] for SD), as used in Mortimer et al. (2024) and Mudryk et al. (2025). The strategies employed to assess the reproducibility of spatial structure are interesting and, if expanded, could provide useful insight into the strengths and limitations of ERA5-Land.
Finally, we have a few minor comments about the treatment of the CMC product and the analysis regions.
Minor comments
- How were the limitations listed in Brown and Brasnett (2010), Section 3.2.1 ("Warnings and Notices"), addressed?
- How was permanent land ice accounted for?
- Given that the snow densities are based on snow classes, did you consider using snow classes instead of ecoregions?
References
Mudryk, L. R., Derksen, C., Kushner, P. J., and Brown, R.: Characterization of Northern Hemisphere Snow Water Equivalent Datasets, 1981–2010, J. Climate, 28, 8037–8051, https://doi.org/10.1175/JCLI-D-15-0229.1, 2015.
Mudryk, L., Mortimer, C., Derksen, C., Elias-Chereque, A., and Kushner, P.: Benchmarking of SWE products based on outcomes of the SnowPEx+ Intercomparison Project, The Cryosphere, 19, 201–218, https://doi.org/10.5194/tc-19-201-2025, 2025.
Mortimer, C., Mudryk, L., Derksen, C., Luojus, K., Brown, R., Kelly, R., and Tedesco, M.: Evaluation of long-term Northern Hemisphere snow water equivalent products, The Cryosphere, 14, 1579–1594, https://doi.org/10.5194/tc-14-1579-2020, 2020.
Mortimer, C. and Vionnet, V.: Northern Hemisphere in situ snow water equivalent dataset (NorSWE, 1979–2021), Earth Syst. Sci. Data Discuss. [preprint], https://doi.org/10.5194/essd-2024-602, in review, 2025.
Sincerely,
Colleen Mortimer and Lawrence Mudryk