Distributed under the Creative Commons Attribution 4.0 License.
Which global reanalysis dataset represents better in snow cover on the Tibetan Plateau?
Abstract. The extensive snow cover across the Tibetan Plateau (TP) has a major influence on the climate and on the water supply of over one billion downstream inhabitants. However, an adequate evaluation of the snow cover fraction (SCF) variability simulated over the TP by multiple global reanalysis datasets has yet to be undertaken. In this study, we examined eight global reanalysis SCF datasets against the Snow Property Inversion from Remote Sensing (SPIReS) product over the period 2001–2020. The results reveal that HMASR generated the best SCF simulations owing to its outstanding spatial and temporal accuracy. GLDAS and CFSR demonstrated acceptable SCF accuracy with respect to spatial variability but struggled to reproduce the annual trend. Pronounced SCF overestimations were found in ERA5, ERA5L, and JRA55, whereas SCF was underestimated by MERRA2, and CRAL generated a poor spatial pattern. Overall, the biases were related to the combined effects of precipitation forcing, temperature forcing, snow data assimilation, and SCF parameterization methods, with the dominant factor differing across datasets. In ERA5 and ERA5L, temperature and snowfall biases exhibited significant correlations with the SCF bias over most TP areas and therefore had a greater impact on the accuracy of SCF in terms of spatial variability and temporal evolution. In contrast, the impact of snow assimilation was possibly more pronounced in MERRA2 and CRAL. Although parameterization methods can improve SCF simulation accuracy, their influence was weaker than that of the other factors, except in JRA55. To further improve the accuracy of SCF simulation, an ensemble average method was developed. The ensemble based on HMASR and GLDAS generated the most accurate SCF spatial distribution, whereas the ensemble containing ERA5L, CFSR, CRAL, GLDAS, ERA5, and MERRA2 proved optimal for capturing the annual trend.
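The abstract mentions an ensemble average method, but this discussion page does not spell it out. Below is a minimal sketch of one plausible reading, assuming annual SCF fields have already been regridded to a common (year, lat, lon) grid. The dataset names match the abstract, but the synthetic arrays, grid sizes, and the unweighted averaging are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Hypothetical annual-mean SCF fields on a common (year, lat, lon) grid,
# e.g. produced by bilinear regridding of each reanalysis. Shapes and
# values are synthetic and purely illustrative.
n_years, n_lat, n_lon = 20, 100, 200
rng = np.random.default_rng(0)
names = ["HMASR", "GLDAS", "CFSR", "ERA5", "ERA5L", "MERRA2", "JRA55", "CRAL"]
scf = {name: rng.uniform(0, 1, (n_years, n_lat, n_lon)) for name in names}
spires = rng.uniform(0, 1, (n_years, n_lat, n_lon))   # reference SCF (SPIReS)

def ensemble_mean(members):
    """Unweighted ensemble average over the selected datasets."""
    return np.mean([scf[m] for m in members], axis=0)

def rmse(a, b):
    """Root-mean-square error pooled over all years and gridcells."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Example: the two-member ensemble highlighted in the abstract for spatial accuracy.
ens_spatial = ensemble_mean(["HMASR", "GLDAS"])
print("RMSE vs SPIReS:", rmse(ens_spatial, spires))
```

Under this reading, the trend-oriented ensemble from the abstract would simply be `ensemble_mean` over the six datasets listed there; weighting schemes, if any, are not described here.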
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2024-82', Anonymous Referee #1, 05 Mar 2024
Yan et al. analyze snow cover fraction from various global reanalyses and one specific snow reanalysis for High Mountain Asia, and relate this to remotely sensed snow cover fraction and a regional meteorological dataset. The study has some potential; however, in the current stage of the manuscript I am not willing to do a full detailed review until basic scientific principles are adhered to (see major points).
Major points:
- Introduction has many inaccuracies, and I stopped at around L70 (see also the minor points below).
- The methods are incomplete for evaluating the manuscript: study area, hydrological basins, temporal period, how snowfall was derived from TPMFD, how the datasets at different resolutions were merged, the definition of the hydrological year (if any), and more.
- I'm not sure about the research aims. If HMASR is the only reanalysis of all those studied that assimilates MODIS observations, and MODIS is used for evaluation, then the expected results are rather obvious.
Minor points:
- L50: I would assume glaciers are more sensitive to long-term climate changes. Snow cover is extremely sensitive to year-to-year variability and, because of this strong interannual variability, less so to long-term changes.
- L54: Beniston et al. (2018) deals with the European cryosphere, not the TP.
- L60: SPIReS is not a product but a spectral-unmixing method. And Stillinger et al. (2023) recommend spectral unmixing in general, not SPIReS in particular.
- L69: There are no "global" meteorological agencies, just regional, at best supranational, ones that produce the reanalyses.
- At this point I grew tired of the introduction: please check the accuracy of your statements, in particular against the references you cite.
- Table 1 and related text: please explain why you defined the SCF parametrization for ERA5, ERA5L, MERRA and JRA as you did (a generic illustration of such closures is sketched after this list).
- L255: Taylor diagrams do not evaluate spatial correlation.
- Data and methods: the study area and period of analysis are missing.
- L265: It is unclear how you applied the CI, since it requires a common grid, and all your datasets come at different resolutions.
- Results: Where do the basins come from? Please introduce them in the methods.
- HMASR covers only part of the 2001-2020 period you used for the other datasets. Why not use a common period? As it is, you introduce artifacts with the differing periods.
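For context on the Table 1 point above: land-surface schemes typically diagnose SCF from prognostic snow water equivalent or snow depth through simple closures. The snippet below illustrates two widely used generic forms, a linear threshold and a hyperbolic-tangent shape; the threshold and length-scale values are placeholders and are not claimed to be the parameterizations actually used in ERA5, ERA5L, MERRA2, or JRA55.

```python
import numpy as np

def scf_linear(swe_mm, threshold_mm=15.0):
    """Linear-threshold closure: SCF ramps from 0 to 1 as SWE approaches a
    tuning threshold (order 10-15 mm w.e. in some schemes; value illustrative)."""
    return np.clip(swe_mm / threshold_mm, 0.0, 1.0)

def scf_tanh(snow_depth_m, z0_m=0.01, melt_factor=1.0):
    """Tanh-type closure: SCF saturates smoothly with snow depth relative to a
    roughness-like length scale; parameter values here are placeholders."""
    return np.tanh(snow_depth_m / (2.5 * z0_m * melt_factor))

swe = np.array([0.0, 5.0, 15.0, 60.0])     # mm water equivalent
print(scf_linear(swe))                      # approx. [0.  0.33  1.  1.]
depth = np.array([0.0, 0.02, 0.1, 0.5])     # metres
print(scf_tanh(depth))
```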
Citation: https://doi.org/10.5194/egusphere-2024-82-RC1
AC2: 'Reply on RC1', Shirui Yan, 05 May 2024
Dear Prof. Anonymous Referee #1,
Thank you very much for your constructive suggestions, which will significantly enhance the scientific quality and language expression of our manuscript. In response to your feedback, we have made extensive revisions to the language and methodology of the manuscript. A detailed point-by-point response to your comments and the modifications made to the manuscript will be provided in the supplementary file. We look forward to the revised manuscript meeting your standards and eagerly await your further review.
Yours faithfully,
Wei Pu,
College of Atmospheric Sciences, Lanzhou University, Gansu, China
5 May, 2024
RC2: 'Comment on egusphere-2024-82', Anonymous Referee #2, 21 Mar 2024
Which global reanalysis dataset represents better in snow cover on the Tibetan Plateau?
This paper evaluates the snow cover fraction (SCF) of eight reanalysis datasets over the Tibetan Plateau during 2001-2020. The SPIReS EO product is used as the reference dataset, and the products are evaluated based on their bias, the Taylor skill score and its component metrics, and trends by means of a consistency index. The impact of biases in temperature and precipitation forcing is assessed through comparison of the reanalysis forcing with a reference dataset (TPMFD). An interesting analysis of the SCF parameterization is also provided, as are tests to identify the optimal combination of SCF products to include in an ensemble. The manuscript tries to draw conclusions on the impact of snow data assimilation on SCF product performance.
A comparison of SCF products over the TP is an important endeavor. The manuscript is promising and contains useful and interesting analysis. However, I find the authors struggle to present their results in a way that builds logically to their stated conclusions, specifically as relates to the role of snow data assimilation. The authors have a tendency to make general statements that do not flow logically from the evidence presented in the text, and at times the evidence presented contradicts these general statements.
The methods are not adequately described (see 'Primary concerns - Methods'). The authors struggle to interpret and communicate the information contained in the Taylor plots. This may be related to the lack of clarity in how the metrics in the plots were calculated and what they represent. The text, particularly the results, could be better organized and the ideas clearly separated. I encourage the authors to make better use of paragraphs. Sometimes the references do not support the statement as written, i.e. either the reference is not appropriate or the summary of the cited study contains inaccuracies. This may be due to language issues.
Primary concerns
I am unconvinced by the statement that the reanalysis datasets incorporating snow assimilation always outperform those without snow assimilation [lines 348-351 and elsewhere]. The results, as presented, do not support this firm conclusion but rather suggest a more nuanced finding. In terms of skill scores, the top two products use snow data assimilation. However, JRA55, which also assimilates snow information, has a skill score comparable to ERA5, ERA5L, and CRAL, which do not assimilate snow information. On the other hand, GLDAS, which does not assimilate snow information, has skill scores comparable to HMASR and CFSR. These findings are presented in the text and seem to contradict the general statement. I suggest the authors provide a more nuanced discussion and revisit how the ideas flow and build towards a specific conclusion. For example, later the authors tie the poor performance of JRA55 to the SCF parameterization. Is it also possible that assimilation of poor snow data (JRA55) can degrade performance? Does the strong performance of GLDAS indicate that performance comparable to the best products that assimilate snow data can be achieved without snow data assimilation? Further, the above results pertain to SCF accuracy and not to the representation of trends, where ERA5 and ERA5L (no snow data assimilation) performed best.
Methods –
The methods, specifically the statistical approaches, need to be clarified to adequately review the results presented in the manuscript.
Did you use anomaly fields or raw SCF values in deriving your Taylor plots? Please clarify.
'Spatial correlation' – was this calculated using all times and locations, which would be more of a bulk correlation than a spatial or pattern correlation? Please clarify. Using the full time series is fine; you just need to keep this in mind when interpreting the results and remove 'spatial' when speaking of the correlation.
Standard deviation ratio – it is unclear what you mean by 'degree of similarity in dispersion patterns'. Note that if you used the full time series, then the variability captured by the STDR would relate to variability in both time and space. Again, it is perfectly fine to use the full 20-year anomaly time series, but the terminology and definitions need to be adjusted to reflect what was calculated.
Depending on how the above metrics were calculated, the wording and interpretations in the results section may need to be revised to accurately reflect the methods.
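To make the referee's distinction concrete, here is a minimal sketch assuming annual SCF anomaly fields on a common grid: a "bulk" correlation and STDR pool all years and gridcells (mixing temporal and spatial variability), whereas a pattern correlation is computed over space only, e.g. from the time-mean fields. The skill score uses one common Taylor (2001) form; whether it matches the manuscript's definition is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.normal(size=(20, 100, 200))          # reference SCF anomalies (year, lat, lon)
sim = ref + 0.5 * rng.normal(size=ref.shape)   # a synthetic "reanalysis"

def bulk_stats(sim, ref):
    """Pool all years and gridcells: mixes temporal and spatial variability."""
    s, r = sim.ravel(), ref.ravel()
    corr = np.corrcoef(s, r)[0, 1]
    stdr = s.std() / r.std()                   # standard-deviation ratio
    return corr, stdr

def pattern_corr(sim, ref):
    """Spatial (pattern) correlation of the time-mean fields only."""
    s, r = sim.mean(axis=0).ravel(), ref.mean(axis=0).ravel()
    return np.corrcoef(s, r)[0, 1]

def taylor_skill(corr, stdr, r0=1.0):
    """One common Taylor (2001) skill score: equals 1 when corr=r0 and stdr=1."""
    return 4 * (1 + corr) / ((stdr + 1 / stdr) ** 2 * (1 + r0))

corr, stdr = bulk_stats(sim, ref)
print(f"bulk corr={corr:.2f}, STDR={stdr:.2f}, SS={taylor_skill(corr, stdr):.2f}")
print(f"pattern corr (time-mean fields) = {pattern_corr(sim, ref):.2f}")
```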
Additionally, the approaches used to investigate the impact of the meteorological forcings and the SCF parameterization should be outlined in the methods section. Currently, the Methods section is limited to presenting the metrics used to compare a product with a reference and does not lay out how the authors plan to meet the objectives stated in the introduction. For example, at ~L372-375 you mention that you investigate SCF bias by looking at snowfall, temperature, and parameterization thresholds, but you do not clearly lay out how you conducted this investigation; instead, the text jumps right into results. This would be fine if the methods were outlined in the methods section, but they are not. What methods were used? Some of what is in the results could be moved to the methods.
For precipitation forcing, did you use snowfall or total precipitation? Please clarify.
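One concrete way the bias-attribution analysis questioned above (~L372-375) could be documented in the methods is a gridcell-wise correlation between interannual forcing biases and SCF biases. The sketch below assumes annual, co-located bias fields (reanalysis minus TPMFD for temperature, reanalysis minus SPIReS for SCF) and uses synthetic data; it is one plausible implementation, not the authors' actual procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
years, ny, nx = 20, 50, 80
t_bias   = rng.normal(size=(years, ny, nx))                   # reanalysis minus TPMFD temperature
scf_bias = -0.3 * t_bias + rng.normal(size=(years, ny, nx))   # reanalysis minus SPIReS SCF

def gridcell_corr(a, b):
    """Pearson correlation over the year dimension at every gridcell."""
    a_anom = a - a.mean(axis=0)
    b_anom = b - b.mean(axis=0)
    cov = (a_anom * b_anom).mean(axis=0)
    return cov / (a.std(axis=0) * b.std(axis=0))

r_map = gridcell_corr(t_bias, scf_bias)        # (ny, nx) map of correlations
print("median gridcell correlation:", np.round(np.median(r_map), 2))
```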
Additional comments
Suggest separating the analysis of the impact of meteorological forcing from that of the SCF parameterization.
L286-288: How did you select these two products as being the 'worst'? If the comparison is relative to SPIReS, which has a TP SCF average of 0.13, then MERRA2 at 0.05 is only 0.08 away from SPIReS, whereas ERA5 (0.41), ERA5L, and JRA55 all appear to have larger absolute biases (with respect to the SPIReS reference) compared to MERRA2. I can't judge from the plot where CFSR falls. I think what you are pointing out is the product with the largest positive bias (ERA5) and the product with the largest negative bias (MERRA2).
Include mention of the STDR of the various products, not just the correlation on the Taylor plots. The skill score is more than just the correlation; there is some mention of this on Line 300.
Linking the maps (Figures 1a and 2a) and the absolute bias to the STDR is tricky because it is not clear how the STDR was calculated, i.e. does the STDR reflect variability in both time and space or just space?
L300 – add a few words about what an STDR close to 1 means.
Description of product performance could be a bit more nuanced: although HMASR is good, and ranks at or near the top in all assessments, it appears to over(under)estimate in the east (west) compared to the SPIReS dataset. These biases probably average out when looking at the TP as a whole. Further, HMASR has too little spatial variability (low STDR) compared to the reference, while GLDAS has a more appropriate amount of 'variability', with an STDR close to 1.
L306-307: unclear what you mean by 'poor Taylor performance'. From Fig. 2b and c it appears that ERA5 captures the amount of variability well enough (STDR), but its large bias results in a high RMSD (and a low skill score). I'd try to be more specific about which elements you are talking about.
L329-330: unclear what you mean by ‘scattered on the Taylor diagram panels’. Please be more specific. i.e. there is a large spread in STDR, RMSD is higher compared to the westerly basins, correlations are fairly consistent between 0.5 and 0.6 (this is just by eye).
L331-332: Do you mean the regional average SCF?
L333-334: Greater deviations compared to what? Unclear here and not obvious from the plots.
L367: start new paragraph with ‘Variations’.
L~415 – MERRA2 and CRAL seem to have strong correlations with temperature and precipitation in the west. I’m not convinced the evidence provided is sufficient to conclude that it is solely the absence of snow assimilation. Maybe use slightly more nuanced language.
L439: cut ‘leading to a better Taylor performance’ and instead just reference the skill scores.
L510-512: You just described how ERA5 and ERA5L have the best agreement in terms of trends, and those products don't assimilate snow data. So the low performance of CRAL might not only be because of the lack of data assimilation. At least, not if I follow the logic as presented in your paper. It might be that I am missing something or that the flow of ideas needs to be revised to support your conclusion.
Section 4.1 – You discuss in terms of good and bad but what do you mean by this? How much can a parameterization alter the SCF and the accompanying SS and CI? Is this the same for all products? When you say it doesn’t change the general performance do you mean the rankings of the products or the SS/CI values or both?
Section 4.2 – not surprising that HMASR+GLDAS produces the best skill scores, because they had the best skill scores already. None of the other products had equivalent but opposite biases to these two products that would even out and improve the overall performance.
L61: more advanced compared to the standard MODIS SCF algorithm?
L73: Unclear which reanalysis dataset you are referring to. Reanalysis datasets in general?
L84-86: unclear. Do you mean reanalysis datasets that assimilate IMS and/or ground data?
L139: specify ‘over mountain areas in the western United States’.
L195: weekly
Citation: https://doi.org/10.5194/egusphere-2024-82-RC2
AC1: 'Reply on RC2', Shirui Yan, 05 May 2024
Dear Prof. Anonymous Referee #2,
Thank you very much for your positive feedback and valuable suggestions, which have greatly contributed to improving the quality of our manuscript. A detailed point-by-point response to your comments and the modifications made to the manuscript will be provided in the supplementary file. We look forward to the revised manuscript meeting your standards and eagerly await your further review.
Yours faithfully,
Wei Pu,
College of Atmospheric Sciences, Lanzhou University, Gansu, China
5 May, 2024
RC3: 'Comment on egusphere-2024-82', Anonymous Referee #3, 24 Mar 2024
This study comprehensively evaluated the snow cover fraction (SCF) of eight reanalysis datasets over the Tibetan Plateau based on a selected remote sensing product. The authors found that each dataset demonstrates distinct characteristics in describing the SCF. By combining analyses of the interpolated atmospheric variables and the parameterizations, the authors systematically investigate the potential causes of the SCF disagreements among these datasets. Additionally, an ensemble algorithm was developed to optimize the SCF. In addition to being appropriate for The Cryosphere, this article offers significant implications for guiding future dataset choices. I can recommend publishing the manuscript in The Cryosphere after essential revisions.
Major comments:
1) Language
I strongly agree with reviewer #1 that the language needs to be significantly improved. I’m not a native speaker, so I leave the work to the authors.
2) Reference datasets
The data assessment and manuscript quality are largely based on the SPIReS dataset. It would be helpful if the authors could provide further clarification regarding this dataset's representativeness. Why is SPIReS chosen as a reference here? At the very least, a summary review of RS products is needed. It would also be nice to add SPIReS (and TPMFD) to Table 1.
3) Manuscript structure
Sec. 3 Results
Please make the results sharper and use numbers to support them. Generally, the results section lacks quantitative descriptions and remains subjective. Please leave the discussion to the discussion section and present only the results here.
Sec. 5 Conclusions
Generally, the conclusions are very specific and lack an overall assessment of the performance of state-of-the-art reanalyses. Furthermore, the conclusion and summary are largely similar to the abstract.
Specific comments:
L22: Please define ERA5L
L71: Please add a relevant reference here.
L109: from 2 to 26 cm
L173: HTESSE“L”
L225: Please provide the full name of the CRA-Land dataset, similar to the other reanalysis datasets mentioned in the manuscript.
L240: Important information on the TPMFD dataset is not available. What variables are used, what are the resolution and temporal coverage, how is snowfall derived, etc.? This could easily be done by adding TPMFD to Table 1.
L372: Jiang et al. (2020) reported significant simulation biases in SCF over the TP. Please consider citing it here.
L404: Remove “This suggests the presence of another significant factor that is responsible for the overestimation of SCF in JRA55”
L413: change "the important role of" to "the importance of"
Table 1: Please also add the reference dataset here.
Figure 2: "Tibetan Plateau region" is used in many figure captions. Please remove the “region” and revise throughout the manuscript.
Figure 4: Instead of using both, I suggest using Autumn/Winter/Spring/Summer OR SON/DJF/MAM/JJA.
Figure 6: To improve comparability, I suggest making the y-axis range of subplot (b) the same.
Figure 10: the abbreviations for the reanalysis datasets are not given.
References
Jiang, Y., Chen, F., Gao, Y., He, C., Barlage, M., Huang, W., 2020. Assessment of Uncertainty Sources in Snow Cover Simulation in the Tibetan Plateau. JGR Atmospheres 125, e2020JD032674. https://doi.org/10.1029/2020JD032674
Citation: https://doi.org/10.5194/egusphere-2024-82-RC3
AC3: 'Reply on RC3', Shirui Yan, 05 May 2024
Dear Prof. Anonymous Referee #3,
We greatly appreciate your positive feedback and constructive suggestions, which have significantly contributed to the improvement of our manuscript. Additionally, we carefully reviewed each comment and made corresponding adjustments and improvements to the manuscript. A detailed point-by-point response to your comments and the modifications made to the manuscript will be provided in the supplementary file. We look forward to the revised manuscript meeting your standards and eagerly await your further review.
Yours faithfully,
Wei Pu,
College of Atmospheric Sciences, Lanzhou University, Gansu, China
5 May, 2024
Shirui Yan
Yang Chen
Yaliang Hou
Xuejing Li
Yuxuan Xing
Dongyou Wu
Jiecan Cui