This work is distributed under the Creative Commons Attribution 4.0 License.
Benchmarking of SWE products based on outcomes of the SnowPEx+ Intercomparison Project
Abstract. We assess and rank 23 gridded snow water equivalent (SWE) products by implementing a novel evaluation strategy using a new suite of reference data from two cross-validated sources and a series of product inter-comparisons. The new reference data combines in situ measurements from both snow courses and airborne gamma measurements. Compared to previous evaluations of gridded products, we have substantially increased the spatial coverage and sample size across North America, and we are able to evaluate product performance across both mountain and non-mountain regions. The evaluation strategy we use ranks overall relative product performance while still accounting for individual differences in ability to represent SWE climatology, variability, and trends. Assessing these gridded products fills an important gap in the literature since individual gridded products are frequently chosen without prior justification as the basis for evaluating land surface and climate model outputs, along with other climate applications. The top performing products across the range of tests performed are ERA5-Land followed by the Crocus snow model. Our evaluation indicates that accurate representation of hemispheric SWE varies tremendously across the range of products. While most products are able to represent SWE reasonably well across Northern Hemisphere non-mountainous regions, the ability to accurately represent SWE in mountain regions and to accurately represent historical trends are much more variable. Finally, we demonstrate that for the ensemble of products evaluated here, attempts to assimilate surface snow observations and/or satellite measurements lead to a deleterious influence on regional snow mass trends, which is an important consideration for how such gridded products are produced and applied in the future.
Status: closed
- RC1: 'Comment on egusphere-2023-3014', Anonymous Referee #1, 17 Mar 2024
Mudryk et al. aim to evaluate 23 different SWE products based on how well they represent SWE climatology, variability, and trends across mountainous and non-mountainous regions in North America and Eurasia. Using existing and newly created reference datasets, the gridded products are scored using skill target diagrams, resulting in a series of Taylor and target plots, eventually leading to an average ranking of the SWE products. The methodology and technical approach to this evaluation is clear. However, the presentation of the datasets, methodology, and results is notably convoluted or disorganized at times, dampening the impact that this thorough analysis could have. In particular, a clear workflow figure could aid the introduction of the overall evaluation strategy, where related text often references several other sections, causing much back-and-forth within the manuscript. While repetition in stating the methodology is appreciated, sometimes the methods, results, and associated discussion appear in a single section, making the information challenging to process. Thus, I include no major analysis comments and suggest the authors primarily focus on restructuring the manuscript for a clearer portrayal of meaningful results. My more pointed comments attached should help with these review items.
- AC3: 'Reply on RC1', Lawrence Mudryk, 20 Jun 2024
- RC2: 'Comment on egusphere-2023-3014', Anonymous Referee #2, 14 Apr 2024
Mudryk et al. present an extensive and impressive comparison and evaluation of several gridded SWE datasets, which I enjoyed reading. It provides an excellent and much welcomed overview of the performance of major global SWE datasets.
The paper is well written and structured. Although the study is extensive and it is sometimes challenging for the reader to follow all details, it provides valuable insight. The conclusions are clear and concise.
Even though the paper maintains a high standard, I have a few concerns that the authors should reflect on.
Many of the SWE datasets used in this study are based on snow accumulation/melt algorithms embedded in different reanalysis models. The derived SWE is therefore a result of the forcing data, primarily temperature and precipitation. When comparing SWE data from these different sources, I miss a discussion of the performance (evaluation scores) of the forcing data used for the various approaches, since the SWE estimates will inherit some of their characteristics.
Another issue I feel is almost neglected in the discussion is the role of the native resolution of the gridded datasets. Snow is a property that shows large spatial and temporal variability. Even though the comparison is performed on a common 0.5°×0.5° grid, the original resolution should have an impact on the estimates. The native resolution of the datasets should be added to Table 1.
A related issue concerns the benchmark data. The in situ data are clearly unevenly distributed in space and also between the regions discussed in the paper. How will the unevenly (and sparsely) distributed snow course data impact the reference dataset, and to what extent will that influence the evaluation scores? Further consideration of this would strengthen the paper.
Further, I miss a reflection on the spatial scale of snow cover in mountainous regions. Performing a comparison on a 0.5°×0.5° grid will smooth out the natural variability in complex terrain. I think that should be more thoroughly analysed and discussed.
At the beginning of Chapter 2.3, the paper would benefit from a brief introduction to prepare the readers and give them an idea of the information presented in the following sections.
Line 49: Mortimer et al. 2024 is missing in the reference list (see also comment to line 290)
Line 57: The term “authoritative” is pretty ambiguous. I would recommend using a more moderate term ;-)
Table 1: Add a column with original resolutions.
Line 109: Explain IMS.
Line 177: The expression “....method tends to sample” is vague. Please be more specific.
Line 244 (and also lines 487, 489): For consistency, please capitalize CCI (to SnowCCI).
Line 290: Mortimer et al., 2023 is missing in the reference list (see also comment on line 49). If this refers to the paper listed as submitted in the references, I would recommend being consistent with the references.
Figure 3 contains a lot of information. For easier interpretation, I would recommend adding axis titles in all panels.
Figure 6: Please explain the term FM.
Line 428 (Chapter 3.3): Please be more consistent with the use of “mountainous” and/or “alpine”. Maybe stick to one of them?
Lines 438-440: Why is that causing these anomalous trends? Please add an explanation.
Lines 483-485: Is that really the case in all regions? Justification in a graph similar to Fig. 1a, separated into domains, would be a nice supplement.
Lines 532-538: Here I feel the authors are speculating instead of pointing at real properties of the input data. These lines need a second look, and maybe rephrasing in order to be more concrete.
Line 546: I like it!
Citation: https://doi.org/10.5194/egusphere-2023-3014-RC2
- AC1: 'Reply on RC2', Lawrence Mudryk, 20 Jun 2024
- AC2: 'Reply on RC2', Lawrence Mudryk, 20 Jun 2024
Thank you for your feedback. We have provided our responses in the document.
Citation: https://doi.org/10.5194/egusphere-2023-3014-AC2
Viewed
HTML | PDF | XML | Total | Supplement | BibTeX | EndNote
---|---|---|---|---|---|---
467 | 168 | 28 | 663 | 64 | 16 | 15