the Creative Commons Attribution 4.0 License.
Investigating the impact of reanalysis snow input on an observationally calibrated snow-on-sea-ice reconstruction
Abstract. A key uncertainty in reanalysis-based snow-on-sea-ice reconstructions is the choice of reanalysis product used for snowfall input. Although reanalysis products have many similarities in their precipitation output over the Arctic Ocean, they nevertheless have relative biases that impact derived snow-on-sea-ice estimates. In this study, snowfall from the ERA5, JRA-55 and MERRA-2 reanalysis products is used as input to the NASA Eulerian Snow On Sea Ice Model (NESOSIM). A Markov chain Monte Carlo (MCMC) approach is used to calibrate the wind packing and blowing snow parameters in NESOSIM run with these different snowfall inputs. A multi-input-averaged snow-on-sea-ice product is then constructed from NESOSIM run with the three reanalysis products. JRA-55 shows the largest departure from the previously-used values (Bayesian priors) when the MCMC calibration is run, and also has the largest posterior uncertainty due to parameter uncertainties. The MCMC calibration reconciles snow depths between NESOSIM run with different reanalysis snowfall inputs, but produces larger discrepancies in snow densities, due to the sensitivity of snow density in NESOSIM to parameter values and weak observational constraints on density. Regional climatologies and trends in the calibrated products are examined and compared to another reanalysis-based snow-on-sea-ice reconstruction, SnowModel-LG. NESOSIM and SnowModel-LG show close agreement in snow depth climatologies in the Central Arctic Ocean region, but differ more in peripheral seas. Trends are found to be region-dependent, and the magnitude of Central Arctic Ocean snow depth trends is more sensitive to the choice of reanalysis input than to the choice of model.
Status: open (until 16 Nov 2024)
RC1: 'Comment on egusphere-2024-2562', Anonymous Referee #1, 01 Oct 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-2562/egusphere-2024-2562-RC1-supplement.pdf
RC2: 'Comment on egusphere-2024-2562', Anonymous Referee #2, 08 Oct 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-2562/egusphere-2024-2562-RC2-supplement.pdf
RC3: 'Comment on egusphere-2024-2562', Anonymous Referee #3, 22 Oct 2024
# Review: Cabaj et al, Investigating the impact of reanalysis snow input on an observationally calibrated snow-on-sea-ice reconstruction
## General
The paper investigates the impact of snowfall rates from three reanalysis products on two calibrated parameter values in the NASA Eulerian Snow on Sea Ice Model (NESOSIM).  NESOSIM is a two-layer snow budget model used to calculate depth and density of snow on sea ice for polar (mostly Arctic) oceans.  NESOSIM is forced by snowfall, wind speed, sea ice concentration, and ice motion.  It accounts for horizontal transport of snow cover by ice drift, densification and metamorphism of the snow pack, and loss of snow cover to leads through wind erosion and transport.  The current paper focuses on calibrating parameters that control these last two processes, and on correcting reanalysis snowfall.  It builds on two earlier papers: one paper that presented a snowfall bias correction procedure using CloudSat as the target snowfall; and a second paper that presented a Markov Chain Monte Carlo automated calibration method.
It is not clear to me what new insights have been gained by the current study or if any new method or solution is presented.  The automated calibration procedure using Markov Chain Monte Carlo is described in Cabaj et al (2023) and the snowfall bias correction is presented in Cabaj et al (2020).  Although the current study evaluates using multiple reanalysis products, I do not think this evaluation in its current form adds much that is new.
NESOSIM is a relatively simple conceptual model that uses simple parameterizations of blowing snow losses and densification.  At a fundamental level the model can be seen as a transfer function that corrects for biases in snowfall products.  The paper demonstrates that parameter values for blowing snow and densification are sensitive to the snowfall product.  It also demonstrates that the two parameters chosen for calibration are highly interdependent.  The first point is well known.  There are numerous examples of model sensitivity to input in the statistical and Earth science literature, including in the terrestrial snow modelling and snow hydrology literature.  The second point should not be surprising because the blowing snow parameterization removes snow (reducing the bias in snowfall input) and the densification parameterization (the rate at which low density new snow is "transformed" into high density old snow) increases/retains snow (increasing the bias in snowfall input).  I suspect a "brute-force" evaluation of the parameter space would have shown this interdependence.  The question for the modeller is how to constrain this interdependence.
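The interdependence can be illustrated with a minimal sketch.  The toy model below is purely hypothetical and is not NESOSIM (which is far more complex); it only shows why a removal term and a retention term fitted to a single target trade off against one another, so a brute-force sweep finds a curve of near-optimal parameter pairs rather than a single point.

```python
import numpy as np

# Toy stand-in for the model: hypothetical response of mean snow depth to a
# blowing-snow loss coefficient and a wind-packing retention coefficient.
# All names and numbers here are illustrative assumptions, not NESOSIM's.
def toy_snow_depth(snowfall, blowing_snow_coeff, wind_packing_coeff):
    # Blowing snow removes snow; wind packing retains it.
    return snowfall * (1.0 - blowing_snow_coeff) * (1.0 + wind_packing_coeff)

target_depth = 0.25      # hypothetical observed mean snow depth (m)
snowfall_input = 0.30    # hypothetical cumulative snowfall (m)

# Brute-force sweep over the 2-D parameter space.
bs_vals = np.linspace(0.0, 0.5, 101)
wp_vals = np.linspace(0.0, 0.5, 101)
BS, WP = np.meshgrid(bs_vals, wp_vals)
cost = (toy_snow_depth(snowfall_input, BS, WP) - target_depth) ** 2

# The minimum-cost region is a ridge, not a point: many (BS, WP) pairs
# reproduce the target equally well, i.e. the parameters are interdependent.
near_optimal = cost < 1e-5
print("Number of near-optimal parameter pairs:", int(near_optimal.sum()))
```

Every row of the sweep contains at least one near-optimal pair, which is the interdependence the review describes: the observations constrain only a combination of the two parameters, not each one individually.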
I also do not think that the approach described here gets at the question of uncertainty.  There are (at least) three sources of uncertainty in models: uncertainty related to input fields, parameter uncertainty, and model structural uncertainty.  Although the scaling with CloudSat addresses some of the input bias and uncertainty, the different parameter distributions for the different reanalysis products suggest that input uncertainty is also included in the assessment of parameter uncertainty.  I would suggest that a better understanding of input and parameter uncertainty would be gained by calibrating the model on measured snowfall from MOSAiC or the drifting stations, and then using these parameter estimates in a bias correction step.  The model (parameterizations and bias correction) should then be tested against data that has not been used for calibration.  This is an important step in any modelling study but has not been included in the present study.
Another issue with the paper is the section on trend analysis.  I don't think this section adds anything to the paper.  Moreover, it is not always clear from the discussion when trends have passed tests for statistical significance.  An example of this is:
> Although the trend magnitudes and seasonal cycles for snowfall vary by region, most of the trends for most products are not statistically significant at a 95% confidence interval, likely due to high interannual variability of snowfall.
In trend analysis, the most common purpose of a significance test is to decide whether or not the null hypothesis, that the trend is zero (α = 0), can be rejected.  If it cannot be rejected (i.e. it is "insignificant"), the null hypothesis has to be accepted; no trend distinguishable from zero can be detected.  Put another way, there is no trend: it is zero, and it cannot be assigned a positive or negative value.  So in the example above, trends for the regions cannot differ if they are not statistically significant.
If the authors can demonstrate that this section is relevant to the paper, I would suggest only discussing statistically significant trends in the text and only showing statistically significant trends in the figures.  Note: the grey hatching is not visible in the PDF version of the paper.  I suggest not showing regions that do not pass the significance test.  Also, the overlapping shaded regions in the line plots obscure the information.  As the plots show trends for each month, the statistically significant trends for each product should be shown as symbols with bars for the 95% confidence intervals, grouped by month, because they are individual data points.
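The significance logic above can be sketched concretely.  The following example uses synthetic series with purely illustrative numbers (not the paper's data) and an ordinary least squares slope with a 95% confidence interval; a weak trend buried in large interannual variability fails the test, so no sign or magnitude should be attributed to it.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2020, dtype=float)

# OLS trend with a 95% confidence interval on the slope.  t_crit is the
# two-sided 95% Student-t critical value for n - 2 = 18 degrees of freedom;
# in practice a stats library would supply it for arbitrary n.
def trend_with_ci(x, y, t_crit=2.101):
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((x - x.mean()) ** 2))
    return slope, t_crit * se

# Two synthetic "snow depth" series: a weak trend swamped by interannual
# variability, and a strong trend with the same noise level.
weak = 0.002 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)
strong = 0.020 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)

for name, series in [("weak", weak), ("strong", strong)]:
    slope, ci = trend_with_ci(years, series)
    # If |slope| <= ci, the null hypothesis of zero trend cannot be
    # rejected, and the trend should not be reported with a sign.
    verdict = "significant" if abs(slope) > ci else "not significant"
    print(f"{name}: slope = {slope:+.4f} ± {ci:.4f} m/yr ({verdict})")
```

Plotting each monthly slope as a symbol with this ± interval as an error bar, as suggested above, makes it immediately visible which trends exclude zero.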
The reader is left to do a lot of work to understand what was done and how the NESOSIM model works.  The description of the scaling using CloudSat is very brief (Lines 211 to 215).  Cabaj et al (2020) describes the scaling but I cannot find how it is applied to the four Arctic quadrants in Cabaj et al (2023).  In my opinion, papers should contain sufficient information to allow a reader to understand what was done.  I would recommend including a brief description of how CloudSat scaling is applied and how blowing snow and wind packing parameters are applied.
The authors should use the correct citations for datasets and I would encourage them to cite DOIs.  For example, the correct citation and title for the NOAA/NSIDC sea ice concentration is:
Meier, W. N., F. Fetterer, A. K. Windnagel, and S. Stewart. 2021. NOAA/NSIDC Climate Data Record of Passive Microwave Sea Ice Concentration, Version 4. [Indicate subset used]. Boulder, Colorado USA. NSIDC: National Snow and Ice Data Center https://doi.org/10.7265/efmz-2t65. [Date Accessed].
This allows the data to be found easily and quickly instead of digging through the paper describing the dataset.  They should check that the data citations are correct for the other datasets they use.
It is not clear to me why the CRREL Buoy snow depths are treated as monthly climatologies but the OIB snow depths are daily.  Why not use daily data from the buoys?
## Specific Comments
Line 227: Suggest "described" instead of discussed.
Line 228: I have no idea what "interpolated from the corners" means here.  Corners of the quadrants, or of what?
Line 242: Which regions and months?
Line 243: I am trying to understand why the inter-reanalysis spread is not reduced by scaling with CloudSat.  The discussion indicates that over the central Arctic, the inflation of JRA-55 may be because CloudSat does not extend above 82N.  This suggests to me that an alternative scaling strategy for this region should be explored.
Line 250: How well does the CloudSat snowfall rate algorithm perform over sea ice?
Line 267: It is not clear if the first iterations refer to the burn-in period or to the first iterations after the burn in period.  Please clarify in the text.
Figure 3: It is difficult to see the individual marginal distributions.  I would suggest using lines rather than bars.
Line 277: I suggest avoiding the "a(b) for c(d)" pattern and writing this out in full.  It is much easier to read.  E.g. "Coefficients of variation for the wind packing parameters are 15% for ERA5, 42% for JRA-55...  Coefficients of variation for blowing snow parameters are 13% for ERA5, 38% for JRA-55..."  Or better yet, use a table.
Line 325: Aren't spread and uncertainty the same thing?
Line 411: Suggest "following" instead of "consistent with".
Citation: https://doi.org/10.5194/egusphere-2024-2562-RC3
Data sets
NESOSIM-MCMC Multi-Reanalysis-Average Product With Uncertainty Estimates Alex Cabaj, Alek A. Petty, and Paul J. Kushner https://zenodo.org/records/13307801
Model code and software
NESOSIM with MCMC calibration Alex Cabaj and Alek A. Petty https://zenodo.org/records/7644948