the Creative Commons Attribution 4.0 License.
A simple snow temperature index model exposes discrepancies between reanalysis snow water equivalent products
Abstract. Current global reanalyses show marked discrepancies in snow mass and snow cover extent for the Northern Hemisphere. Here, benchmark snow datasets are produced by driving a simple offline snow model, the Brown Temperature Index Model (B-TIM), with temperature and precipitation from each of three reanalyses. B-TIM offline snow performs comparably to or better than online (coupled land-atmosphere) reanalysis snow when evaluated against in situ snow measurements. Sources of discrepancy in snow climatologies, which are difficult to isolate when comparing online reanalysis snow products amongst themselves, are partially elucidated by separately bias-adjusting temperature and precipitation in B-TIM. Interannual variability in snow mass and snow spatial patterns is far more self-consistent amongst offline B-TIM snow products than amongst online reanalysis snow products, and specific artifacts related to temporal inhomogeneity in snow data assimilation are revealed in the analysis. B-TIM, released here as an open-source, self-contained Python package, provides a simple benchmarking tool for future updates to more sophisticated online and offline snow datasets.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2024-201', Anonymous Referee #1, 14 Mar 2024
This manuscript presents how a simple off-line snow temperature index model (B-TIM) can be used to highlight discrepancies between snow water equivalent (SWE) products from three reanalyses (JRA-55, ERA5 and MERRA-2) and an additional product (ERA5Snow) for the historical period (1980-2020). The authors used either raw (biased) or bias-adjusted temperature and precipitation from the reanalyses as input data for B-TIM. The SWE fields produced with B-TIM and the various sets of input data were then compared to the SWE fields produced by the reanalyses. Climatological characteristics and interannual variability were investigated. To carry out this study, they improved and translated the previous version of B-TIM and made it publicly available. This manuscript opens up the possibility of using a simple off-line model for large-scale snow cover studies.
General comments
The manuscript is well written and structured. The figures are simple, clear and concise, with an appropriate choice of colors. It could nevertheless be improved by considering the following points.
Some modifications in the structure could lighten the text and focus more on the results and the contribution of using a simple off-line model like B-TIM. For example, a large part of section 2.1 would have a more appropriate place in the SI. After all, this is not a paper about improving B-TIM, but more about using it. If this is not the case, please change the title and specify this aspect more clearly in the objectives.
It would be useful to remind the reader of the context in the results. For example, simply add a sentence to remind us that ERA5 and ERA5Snow have the same meteorology, which explains why we don't have BrE5S.
Methodological choices (e.g., bias adjustment) could be justified in greater detail, with more references where possible.
The results focus on very large domains (Northern Hemisphere, Eurasia, North America). It would have been interesting to look at these results more regionally.
Specific comments
Temperature and precipitation biases were corrected with a multiplicative factor. As precipitation is a zero-bound variable, it is generally corrected by a multiplicative method, whereas temperature is often corrected by an additive method. The choice of this method is not sufficiently justified. Can you explain thoroughly why you chose a multiplicative factor and why you apply it this way?
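For reference, the distinction the reviewer draws can be sketched as follows. This is a minimal numpy illustration with hypothetical daily series, not the authors' data or implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily series: a reference and a biased reanalysis counterpart.
precip_ref = rng.gamma(2.0, 2.0, 365)   # mm/day, zero-bound
precip_mod = 1.3 * precip_ref           # reanalysis with a wet bias
temp_ref = rng.normal(-5.0, 8.0, 365)   # deg C
temp_mod = temp_ref + 1.5               # reanalysis with a warm bias

# Multiplicative scaling: the usual choice for zero-bound variables like
# precipitation, since it preserves zeros and non-negativity.
precip_adj = precip_mod * (precip_ref.mean() / precip_mod.mean())

# Additive shift: the usual choice for temperature, which has no physical
# zero on the Celsius scale; a multiplicative factor would behave
# erratically for values near 0 deg C and flip signs for negative ones.
temp_adj = temp_mod + (temp_ref.mean() - temp_mod.mean())
```

Both adjustments match the reference mean by construction; the difference the reviewer points to is in how they treat zeros, negative values, and the shape of the distribution.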
Figure captions contain results, whereas captions should only contain descriptions of the elements present in the figure (colors, symbols, etc.). Please remove the result part in the captions.
L102: Did you perform tests regarding the 20% of precipitation reduction?
L109: Table 2. Please find a more consistent way to present column “Model variable”. For example: « t2m (ID 167) » instead of « Parameter ID 167: "t2m" ». Add the model variable name for SWE in ERA5Snow. Also, this table could go in the SI, as it doesn't provide much relevant information to the text.
Table 2, L215, L268, L428: Modify MERRA2 to MERRA-2.
L232: To take advantage of the fact that you have an SI, it might be interesting to present the differences in domains used for the different reanalyses (land grid points, mountainous grid points, etc.).
Fig. 3: Please describe colors used in the scatterplots in the caption; modify ERA5-Snow to ERA5Snow; consider using hatches for ERA5Snow in the right panel and present the legend in a neutral color.
L374: Modify JRA55 to JRA-55.
L469: Modify ERA5-Snow to ERA5Snow.
Citation: https://doi.org/10.5194/egusphere-2024-201-RC1
AC1: 'Reply on RC1', Aleksandra Elias Chereque, 14 Jun 2024
RC2: 'Comment on egusphere-2024-201', Anonymous Referee #2, 07 May 2024
I echo the summary and comments of the first Reviewer, so I will not repeat them here. This is a solid analysis, but there are some points that should be addressed.
General comment:
The goal of this work (from my understanding) is to create benchmark snow datasets using an offline model, which is potentially more consistent than reanalysis or coupled model snow products that can be affected by uncertainties and errors related to forcing, data assimilation, model bias, and coupling. The authors assert that offline modeling can “isolate the role of meteorological driving” from these other issues. This is largely true. However, I would caution the authors that the model they are using still has a number of parameters whose values they chose, and which affect the snow output from the model. With a different set of parameters, the model could (or arguably will) provide a different indication of the amount of error introduced by meteorological forcing, because the model dynamics will change. So, it’s not entirely possible to disentangle forcing uncertainty from model construction and parameter uncertainty, without an exhaustive analysis of model sensitivity. I would recommend that the authors qualify their statements by noting that this is only one parameter set for this model, and their findings might be different if the parameters in Table 1 were changed.
Specific comments:
Lines 66-70: Past studies have attempted to assess the influence of various factors on snow model uncertainty, including forcing, and it would be appropriate to cite one or more here (e.g., Raleigh et al., 2015, https://doi.org/10.5194/hess-19-3153-2015).
Fig. 1: The 20% of precipitation loss seems quite arbitrary. I realize that this constant derives from the Brown et al. 2003 paper, but there is no reason to assume that this loss rate would be consistent across sites. This parameter could have a strong influence on the magnitude of snow accumulation. Can the authors give some indication of why a constant 20% is the best choice?
Fig. 1: What are delta rho_c and delta rho_w? I do not see them mentioned anywhere else in the text?
Lines 279-280: The authors state that ERA5 outperforms JRA-55 and MERRA-2, based on uRMSE and correlation. However, Figure 3g seems to show that the bias is higher for ERA5. Shouldn’t the bias be important here as well? What about raw RMSE (without removing the bias)? I would guess that most users of these datasets are unlikely to unbias them before using them.
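The reviewer's point rests on the exact decomposition RMSE² = bias² + uRMSE², which the following numpy sketch illustrates with hypothetical SWE pairs (the identity holds regardless of the data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical station/product SWE pairs (mm); not data from the paper.
obs = rng.gamma(3.0, 30.0, 500)
mod = obs + 20.0 + rng.normal(0.0, 15.0, 500)  # biased, noisy product

bias = np.mean(mod - obs)
rmse = np.sqrt(np.mean((mod - obs) ** 2))
# Unbiased RMSE: spread of the errors after removing the mean bias.
urmse = np.sqrt(np.mean(((mod - obs) - bias) ** 2))

# RMSE^2 = bias^2 + uRMSE^2 exactly, so a product can score well on uRMSE
# and correlation while its raw RMSE is still inflated by a large bias.
```

This is why reporting uRMSE alone can flatter a biased product: users who consume the raw fields experience the full RMSE, not the bias-removed version.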
Line 345: The authors find that the “B-TIM products provide more consistent descriptions of key snowpack climatology metrics”. This is true, but consistency does not necessarily mean accuracy. It’s possible that one of the reanalyses is a more accurate reflection of reality. The authors could use their in-situ data to evaluate this, but have not yet sufficiently done so in this manuscript.
Line 366: Why did the authors not use a more robust trend method, like Theil-Sen slope (which is less influenced than OLS by outliers, and the start and end of time series), for detrending?
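The two detrending options can be contrasted in a short numpy sketch (illustrative synthetic series, not the paper's data; `scipy.stats.theilslopes` provides the same estimator with confidence bounds):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1980-2020 SWE anomaly series with a trend and one outlier.
years = np.arange(1980, 2021)
swe = 5.0 - 0.3 * (years - 1980) + rng.normal(0.0, 2.0, years.size)
swe[0] += 30.0  # a single outlier at the start of the record

# OLS slope: least-squares fit, sensitive to outliers and record endpoints.
ols_slope = np.polyfit(years, swe, 1)[0]

# Theil-Sen slope: median of all pairwise slopes, robust to both.
i, j = np.triu_indices(years.size, k=1)
ts_slope = np.median((swe[j] - swe[i]) / (years[j] - years[i]))
ts_intercept = np.median(swe - ts_slope * years)

detrended = swe - (ts_slope * years + ts_intercept)
```

With the outlier placed at the start of the record, the OLS slope is dragged away from the underlying trend while the Theil-Sen estimate stays close to it, which is the robustness property the reviewer is asking about.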
Fig. 8: Interesting to make the point that differences among the reanalyses are greater than among the B-TIM products. Isn't that kind of expected? Wouldn't it also be informative to compare B-TIM vs. reanalysis pairs (same forcing, different models), using more than just correlation (as in Fig. 9, but using bar charts as in Fig. 8, for example)?
Lines 422-425 (section 4, 3rd bullet): The authors show that the B-TIM model results in “far more consistent interannual variability” than the reanalysis products. However, this does not necessarily mean that the B-TIM interannual variability is more “correct” (i.e. a more accurate representation of the true interannual variability). Can the authors show using their in-situ data that less (or more consistent) interannual variability results in greater accuracy?
Lines 457-459: Similar comment as above. The authors suggest that there is a “problem” with JRA-55. This is a strong statement to make. It’s true that JRA is the least accurate by some metrics and different from the other reanalyses, so it’s possible that the authors’ suggestion is correct. However, the authors have not shown in this manuscript that the interannual variability of JRA-55 is wrong. Maybe the interannual variability in the other reanalyses is too muted? The authors have in-situ data available to back up their statement, but they have not yet sufficiently done so in the manuscript.
Citation: https://doi.org/10.5194/egusphere-2024-201-RC2
AC2: 'Reply on RC2', Aleksandra Elias Chereque, 14 Jun 2024
Data sets
Replication Data for: "A simple snow temperature index model exposes discrepancies between reanalysis snow water equivalent products" Aleksandra Elias Chereque https://doi.org/10.5683/SP3/IV6SVJ
Model code and software
Brown Temperature Index Model Aleksandra Elias Chereque https://doi.org/10.5281/zenodo.10044951
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 455 | 138 | 28 | 621 | 57 | 20 | 16 |
Aleksandra Elias Chereque
Paul J. Kushner
Lawrence Mudryk
Chris Derksen
Colleen Mortimer