This work is distributed under the Creative Commons Attribution 4.0 License.
Combining observational data and numerical models to obtain a seamless high temporal resolution seasonal cycle of snow and ice mass balance at the MOSAiC Central Observatory
Abstract. Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) observations span an entire annual cycle of Arctic snow and sea ice cover. However, the measurements of atmospheric and ocean forcing, as well as distributed measurements of snow and ice properties, were occasionally interrupted for logistical reasons. The most prolonged interruption happened during the onset of the summer melt season. Here we introduce and apply a novel data-model fusion system that can assimilate relevant observational data into a collection of modeling tools (SnowModel-LG and HIGHTSI) to provide continuous, high temporal resolution (3-hourly) time series of snow and sea ice parameters over the entire annual cycle. We used this system to analyze differences between the three main ice types found in the MOSAiC Central Observatory: relatively deformed second year ice, second year ice with extensive smooth refrozen melt pond surfaces, and first year ice. Since SnowModel-LG and HIGHTSI were used in a 1-D configuration, we used a sea ice dynamics term D to parameterize the redistribution of snow to newly created ridges and leads. D correlated highly with the sea ice deformation (R2=59 %, N=33) in the vicinity of the observatory and was at times as high as 10 % of all winter snowfall. In addition, we show, in separate simulations for level ice, that snow bedforms with thin snow in the bedform troughs largely control the ice growth. Here, mean snow depth minus one standard deviation was required to simulate realistic sea ice thickness using HIGHTSI; we surmise that this accounts for the control of relatively thin snow on local ice growth. Despite different initial sea ice thickness and freeze-up dates, sea ice thickness of level ice across all ice types became similar by early winter. Our simulations suggest that the mean (spatially distributed) MOSAiC snow melt onset began in late May, but was interrupted by a snowfall event and delayed by 3 weeks until mid-June.
The level ice started to melt in the last week of June. Depending on the sea ice topography, the ice was snow-free between late June and early July.
Status: open (until 16 Jan 2025)
RC1: 'Comment on egusphere-2024-3402', Anonymous Referee #1, 07 Jan 2025
This article presents an application of the use of snow and sea ice models (SnowModel-LG and HIGHTSI) with assimilated observations from the MOSAiC campaign to produce a continuous time series of snow and sea ice data at the location of the MOSAiC Central Observatory. As the article discusses, although MOSAiC has many high-quality observations, the campaign nevertheless occasionally experienced unavoidable data-collection interruptions. In this work, SnowModel-LG, a model used to produce snow on sea ice, is run in a 1D configuration and is used to provide input to HIGHTSI, a 1D thermodynamic sea ice model. The result is a 3-hourly time series of simulated snow and sea ice properties which helps fill in observational gaps during the MOSAiC campaign. The residual term in the SnowModel-LG budget, D, is found to correlate well with sea ice deformation.
In my view, this work is of interest to the scientific community, and I believe that the methodology of this study is sound. MOSAiC has a suite of measurements which are very well-suited for assimilation into a model in a 1D configuration. SnowModel-LG is a widely-used snow-on-sea-ice model with detailed representations of snow processes, and the use of HIGHTSI enables the modelling of sea ice in conjunction with snow. I find it encouraging that even after interrupted observations, SnowModel-LG and HIGHTSI show high fidelity in representing snow and sea ice conditions during MOSAiC once observational corrections are applied. This study also provides some scientifically relevant insights into fine-scale, climate-relevant processes and how climate-model representations of such processes could be improved. The manuscript is well-structured and generally clearly written, and the scientific conclusions follow clearly from the results. The one outstanding point for me is the availability of the data, which has not been provided with the preprint, though I recognize that the authors have stated it will be available following publication. There are also some minor points where I think additional clarification would be beneficial, which I list with my comments below.
General comments:
- The snow density observations are linearly fitted before being used as input to SnowModel-LG. Although I understand the necessity of such a fit (given the high spatial variability relative to the seasonal evolution, as discussed in this work) and I agree with the use of it here, I would appreciate seeing some discussion of the possible biases which may be introduced from using this approach.
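The seasonal fit under discussion, and the residual-based bias check the referee asks for, can be sketched as follows. All sample values and variable names here are hypothetical illustrations, not the authors' data or code:

```python
import numpy as np

# Hypothetical snow bulk-density samples (kg m^-3) versus days since snow onset;
# the real MOSAiC data set combines density cutter, SMP and SWE-cylinder
# measurements (N=591), so these values are illustrative only.
days = np.array([10.0, 40.0, 80.0, 120.0, 160.0, 200.0, 230.0])
rho = np.array([180.0, 220.0, 260.0, 290.0, 310.0, 330.0, 320.0])

# Ordinary least-squares linear fit, analogous to the seasonal fit applied to
# the observed densities before assimilation into SnowModel-LG.
slope, intercept = np.polyfit(days, rho, 1)
rho_fit = slope * days + intercept

# The residuals quantify the bias a linear fit can introduce when the true
# seasonal evolution is non-linear (e.g. rapid spring densification).
residuals = rho - rho_fit
print(f"slope = {slope:.3f} kg m^-3 per day")
print(f"max |residual| = {np.abs(residuals).max():.1f} kg m^-3")
```

Inspecting the sign pattern of such residuals over the season would be one concrete way to quantify the biases this comment asks about.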
- Data availability: This section is currently incomplete, please include specific references for all datasets used. Also, is the model source code publicly available?
Specific comments:
- Figure 5: I appreciate the authors wanting to convey all possible information and very much understand the difficulties in presenting a variety of information in a single plot, but Figure 5 is somewhat difficult to read for me. Some of the lines obstruct the points in such a way that it is difficult to see the points themselves, particularly if they’re overlapped by a dotted/dashed line while also containing a white dot. I suggest possibly removing the grid (or moving it to the background behind the points so that it doesn’t obstruct them). Possibly also the fit lines could be rendered as solid lines and moved behind the data points? Since the bulk densities calculated from the individual cutter measurements are shown, perhaps the individual cutter values could be left out and shown in a supplement instead. Regardless of other possible changes suggested here, I do strongly suggest extending the plot vertically or otherwise adjusting the plot so that the legends do not overlap any data points. Otherwise, I will ultimately leave the choice of what to do here to the judgement of the authors.
- Figure 7: I appreciate seeing the time series and scatter plots here, but regarding the scatter plots, it’s not surprising to me that the assimilated values correlate well with the model. However, I am curious about the performance for the melt season (non-assimilated) values. Could you provide correlations for the melt period values alone? (Or would n be too small for this to be meaningful?)
- Figure 7b): Snow density in mid-late July appears to fluctuate rapidly; is this due to artefacts from intermittent periods of bare ice? Would appreciate a brief comment on this. Also, just to be clear, is this the corrected density modified by the density correction parameter (line 313-315)? I am curious how large the correction was, here.
- Figure 9: Since you also make reference to some of the events shown in coloured shading in Fig 8 while describing this figure, possibly consider adding the coloured shading to this figure as well. I would also appreciate more specificity in the captions and/or legend about what is observed vs. what is from the model.
- Figure 10: “some observations during quiescent periods are skipped in the Nloop”: could you elaborate on why this was done?
- Line 170: How many measurements were included in this average?
- Line 183: When you say 243 measurements were selected, do you mean that you applied some selection criteria? Or is this just all the bulk snow density measurements at the sites you're examining?
- Line 309: I know you say that the plots for other ice types are similar, but I would appreciate still seeing them, perhaps in a supplement.
- Line 340: Could you clarify how 1 standard deviation of snow depth is defined here? E.g. is this one standard deviation with respect to the average over the entire season?
- Line 432: The reasoning as to why precipitation observations have low enough errors for this to be detected is not entirely clear to me. Are you saying that this follows from the fact that there is a strong correlation between the derivatives of D and total sea ice deformation?
- Line 577: If around 50-60% of D can be explained by deformation, could you comment on what could be attributed to what remains of this residual? In particular, do you expect it to be attributable just to error or are there additional processes not being considered?
Technical corrections:
- Reference section: Several of the DOIs in the references appear to have formatting errors (e.g. the doi.org URL is repeated twice), which occasionally also breaks the hyperlinks in the article.
- Line 35-36: This sentence is confusing to me as it's currently phrased; did you mean to say that the drift of the expedition is shown in Fig 1?
- Line 140: survided -> survived
- Line 185-186: either should be “SMP measurement sites.” or “SMP measurements.”
- Line 377: SnowModel-LG misspelled
- Line 416: “affect” should be “effect”
- Line 461: phenomena -> phenomenon
Citation: https://doi.org/10.5194/egusphere-2024-3402-RC1
AC1: 'Reply on RC1', Polona Itkin, 09 Jan 2025
Dear referee,
Thank you very much for your review, comments and suggestions. All your comments will be addressed, references improved and simulation data and code provided/published accordingly in the revision.
Kind regards,
Polona
Citation: https://doi.org/10.5194/egusphere-2024-3402-AC1
RC2: 'Comment on egusphere-2024-3402', Anonymous Referee #2, 10 Jan 2025
In this paper, the authors combine the MOSAiC observational time series with two one-dimensional models, one for the snow and one for the ice, to bridge gaps in the time series and to complete one full annual cycle. The produced data set of SWE, snow density, snow and ice thickness can be interesting for other applications and from the analysis some (first) conclusions can be drawn about the impact of snow deformation on the snow depth evolution and on how snow depth heterogeneity influences heat transfer through the ice.
While the paper is clearly written in the sense that you can easily read and follow the sentences, I often had difficulties finding out what EXACTLY has been done and why. This is reflected in the numerous comments you will find below, which also might suggest that a final consistency check/internal review would have been useful and should not be the task of the reviewers...
My main concerns are listed below, while individual comments follow after that:
1. The authors stress how important the initial conditions (for freeze-up/snow accumulation start date and ice thickness) are for the simulations, but the freeze-up date is determined inconsistently. For the first ice type the 10 m (why not 2 m?) 3-hourly air temperature is used, while for the second ice type the 3-day running mean is used. However, this is not stated or discussed anywhere; I inferred it from the figure (Fig. 4). Where the assumed initial ice thicknesses, especially for the second ice type (10 cm), come from is not clear. This, combined with the uncertainty from using reanalysis precipitation multiplied by 2.13 (?, see point 3 below) for the first 2.5 months of the simulations, should be acknowledged as a larger uncertainty of the produced time series.
2. A linear fit to the observed snow density data is assumed, which I am not convinced of. The implications and resulting uncertainties of this assumption should at least be discussed more extensively.
3. To drive the snow model with atmospheric data, especially in the initial and final phase of the simulations, the authors use MERRA-2 reanalysis data. The precipitation from MERRA-2 is multiplied by 2.13 to do so, and the authors claim that this is the result of comparing in situ precipitation observations with the reanalysis data where available, but the comparison is not presented, the high value of 2.13 is not discussed anywhere, and the precipitation in the time frames when only the reanalysis data is used is considerably higher than during the observation times (see Fig. 6c and my comments for Sect. 3.7 and Tab. 1)! This raises some questions which are not addressed at all.
4. I suspect that something has been mixed up in Sect. 5.1 and 5.2. My guess would be that 5.2 should come before 5.1; otherwise I would have even more difficulties understanding what has been done here (see the specific comments below). This caused lots of confusion. Because my comments would differ depending on whether this is the case, I prefer this to be checked first. Anyway, I would prefer to see the results for the simulations both with and without the assimilations. If I understand correctly, this is only shown for SWE in Fig. 8, and then the accordingly assimilated SWE is shown in Fig. 7a (?). I assume that in Fig. 7b and c only the assimilated snow density and snow depth are shown? It may be useful to also discuss what happens in the 'model world' when the model is dragged to match the observations (maybe strange things happen when the model is forced to use values that are not consistent with the model physics?).
5. Different numbers of observations are used at different locations of the paper, and I cannot always follow where these come from.
6. The authors claim that their analysis confirms the good quality of the precipitation data because the difference between the simulated and observed snow evolution is (relatively) small and can partly be explained by deformation. I would suggest to state more clearly that this assumption (if I understood this correctly) is mainly drawn because a correlation between the difference between simulated and observed snow evolution/SWE (which is hypothesized to be attributed to deformation) and total deformation obtained from buoy position measurements is found (r2=0.58). This chain of arguments should be stated more clearly (if this is what the authors meant).
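The chain of arguments this point asks to have stated explicitly reduces to a squared Pearson correlation between two time series. A minimal sketch, with invented numbers rather than the paper's data:

```python
import numpy as np

# Hypothetical time series: the time derivative of the snow-budget residual D
# and the total sea ice deformation derived from buoy positions (arbitrary
# units). The paper's r^2 = 0.58 (N = 33) is this statistic on the real data.
dD_dt = np.array([0.10, 0.40, 0.20, 0.90, 0.30, 0.70, 0.05, 0.60])
deform = np.array([0.20, 0.50, 0.10, 1.00, 0.40, 0.60, 0.10, 0.70])

# Squared Pearson correlation: the fraction of variance in dD/dt that a linear
# relationship with deformation can explain.
r = np.corrcoef(dD_dt, deform)[0, 1]
print(f"r^2 = {r ** 2:.2f}")
```

Note that a high r^2 here supports attribution of D to deformation, but by itself it does not bound the error of the precipitation forcing; the two claims need separate arguments.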
7. The authors should differentiate more on what are outcomes from the MOSAiC observational timeseries (published already earlier) and the NEW insights due to combining the observations with models. I suspect that parts of the conclusions contain information related to the first and not the latter one (see specific comments below).
Specific comments:
l. 10: 'D ... and was at times as high as 10% of all winter snowfall' -> When only reading the abstract, this sentence is hard to understand correctly. Maybe something like 'deformation appears to contribute/explain 10% of ...'.
l. 28: roles of snow... has
l. 45: I suggest: ... collected once a week, the following caused interruptions of up to 2.5 months: list of things -> would be much easier to read. What do you mean with 'early fall MOSAiC ship arrival' and 'summer ship departure'? Sounds like the starting and ending point of the time series? Do you mean that there is a gap because the time series does not cover the full 12 months?
l. 72: 'This implies that no snow had accumulated on it during summer.' -> Do you mean: we do not know whether snow has accumulated on the ice DURING summer but at the end of summer all precipitated snow had been transformed to ice/frozen melt ponds?
l. 84: that -> which (also elsewhere)? and 'are not generally' -> 'are generally not'?
l. 139: '...was 0.5 m thick on 1 August': Ok, you argue that October ice thickness is a good estimate for end-of-summer ice thickness; does this mean that 0.5 m ice thickness was the mean/modal(?) ice thickness as measured on this ice type in October? (I had to speculate as this is not explicitly stated here)
Fig. 3, legend: 'Runwy'
l. 140: survided
l. 143: Why did you use 10 m air temperature from reanalysis (and not 2 m, as would be available for reanalysis data)? I would state already here what reanalysis you used (and not only refer to Section 3.7)
Caption of Fig. 4:
1. the caption should include what temperature time series is shown (biased corrected MERRA2 reanalysis)
2. sea ice water -> sea water?
3. 'The dates when air temperature AND its running mean depart continuously from the freezing temperature...' -> inconsistent/confusing choice of the two dates (blue and purple vertical bars): if you choose to set the dates according to the running mean, the second date agrees with this definition, but the first date is not detectable from the shown time series (the running mean is below the threshold for the whole time series shown here). However, when making the choice based on the air temperature series (instead of the running mean), the first date agrees with this definition, but the second date would be shifted to a later day (and from the figure it would not be obvious whether it should be even later or not). Or if you meant to refer to this difference by using 'respectively' at the end, this is not clear and not distinguishable for the reader, and would in addition require further explanation of why these different definitions are used.
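As an illustration of the ambiguity pointed out here, a freeze-up detector based on a running mean can be sketched as follows. The daily values and the window are invented for the example; the paper apparently uses 3-hourly data and two different criteria:

```python
import numpy as np

# Hypothetical daily mean air temperatures (deg C) around freeze-up.
temps = np.array([0.0, -0.5, -1.0, -2.5, -3.0, -1.0, -2.8, -3.5, -4.0, -4.2])
t_freeze = -1.8  # approximate freezing point of sea water (deg C)

# 3-day running mean (the criterion apparently used for the second ice type).
window = 3
run_mean = np.convolve(temps, np.ones(window) / window, mode="valid")

# Freeze-up: first day from which the running mean stays below freezing for
# the rest of the record ("departs continuously from freezing temperature").
below = run_mean < t_freeze
idx = next(i for i in range(len(below)) if below[i:].all())
print(f"freeze-up at day index {idx + window - 1}")
```

Running the same detector on the raw temperatures instead of the running mean generally yields an earlier date, which is exactly the inconsistency between the two vertical bars described above.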
l. 146/150: Where do these assumptions (0.1 m/0.05m thickness) come from, especially the 10cm assumption?
l. 147: bellow
Section 3.2 + 3.3: Maybe it would be better to combine these sections, as these paragraphs are a bit confusing because they jump back and forth between the different measuring methods (transects, drilling sites, stake sites) without clear transitions.
l. 156: 'no Magnaprobe snow depth measurements were used after mid-July' -> more precisely (I guess): after mid-July 2020 (i.e. the final phase of MOSAiC)
Section 3.4, l. 174-192: the jumping back and forth between the different methods to determine snow density makes this section harder to read: you mention all three methods (ok), then you write something about the cutters, then about the SMP, then about the SWE cylinders, then again the cutters, the SMP, the SWE (I suggest going through this cycle once; this also avoids unnecessary repetitions).
l. 183: why 'still'?
l. 219: ' In addition, the final sampling on Snow 1 provided deeper and denser snow values.' -> meaning what?
Fig. 5:
1. you give N=591 but maybe you could add in the caption the numbers for the three different methods such that the reader does not have to search them from the text.
2. fit line for cylinder measurements: in spring 2020 there seem to be enough measurements (marked by crosses) but all of them are far below the fit line. It is hard to judge from this figure (with all the other points from the other two measurement methods) whether the fit is mathematically correct (which it probably is), but it also shows very clearly that a linear fit as calculated here is not a very feasible assumption for these data points... (which should at least be mentioned somewhere)
3. equation rho_s = 0.22 x is given without units (also in the text description)
Section 3.5: for the other sections you already show some numbers/results, here not. Are the shaded areas, e.g. in Fig. 6, determined here?
Section 3.6 + Fig. 1: Is the 'buoy' given in Fig. 1 the buoy 2019I3 mentioned in the text? From the description it is not clear why there are different drift trajectories and what the different trajectories are: so, in the figure, the trajectories marked as MOSAiC CO1/CO2 are from the ship positions, and they are almost identical to the buoy, right? Reading the text, I would assume that the back trajectory model is only used to extend the ship's or the buoy's trajectory at the beginning and the end, but the figure suggests that this is a separately derived drift trajectory for the whole time series. Wouldn't it be better to use the buoy's trajectory where available and only add the beginning and the end? Also, the beginning of the trajectory looks strange...
Section 3.7 and Tab. 1: It would be interesting to see the MERRA-2 data compared to the observations. The authors multiplied the water equivalent precipitation from MERRA-2 by 2.13 and (as a result?) in the time sections where the reanalysis data is used, the precipitation is considerably higher than in the remaining time: increasing from 0 to 5m within less than 2.5 months and later by approx. 3m in 2.5 months (of reanalysis data), while the cumulative precipitation increased by only approx. 4m in the remaining more than 7 months (of observation data). The reader should at least have a possibility to retrace whether this is realistic, where this comes from and what uncertainties are involved.
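The comparison asked for here presumably reduces to a ratio over the period where both products overlap. One plausible way such a factor could be derived (this is an assumption for illustration; the preprint does not show the actual method, and the values below are invented):

```python
import numpy as np

# Hypothetical overlapping interval where both in situ precipitation
# observations and MERRA-2 reanalysis precipitation are available
# (mm w.e. per accumulation interval); all values are invented.
obs_precip = np.array([2.0, 0.0, 5.1, 1.2, 3.3])
merra2_precip = np.array([1.0, 0.1, 2.3, 0.6, 1.5])

# Simplest estimator: ratio of the cumulative sums over the overlap period.
# The preprint states a factor of 2.13 but does not present the comparison.
scale = obs_precip.sum() / merra2_precip.sum()
print(f"scaling factor = {scale:.2f}")
```

Showing the observed and scaled-reanalysis cumulative curves side by side would let the reader judge whether a single constant factor is adequate over the whole year.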
l. 260: I suggest writing: 'In this paper, only ocean surface heat fluxes derived from SIMBA buoys by Lei et al. (2022) were used.'
l. 275: SWE is calculated -> the change in SWE (with time) is calculated
l. 280: the units ... is -> the unit ... is / units ... are
l. 304: 'any and all'?
Section 5.1: The description in this section could be clearer. I am also not convinced that a linear fit to the snow density values (Fig. 5) is the best choice to describe the observed snow densities, nor whether this fit is something you would want to use to correct/assimilate the model. Again, I would like to see how the model and the observations (e.g. of snow density) differ when the model is run without assimilations. And if there are large differences, the reasons for this should be discussed somewhere. And what happens in the model when snow density and thickness are simply changed (assimilated) without physical reasons (in the model's world)?
Fig. 7: Do I understand correctly that in the scatter plots the assimilated model values are compared with the observations (and thus are located (almost exactly) on the 1:1 line), and only because the model is not assimilated for the melt season there are some scatter points where the model and the observations differ (which then leads to r2 values not equal to 1.0)? I am not sure how much sense these scatter plots make under these circumstances...?
I guess the circles are not only 'assimilated values' (purple) and 'melt period values' (grey) but could be marked as 'observed values' (which are used for assimilating the model (purple) or not (grey, melt season)).
l. 310: 'the final SWE evolution defined above was further modified' -> what is the 'final' evolution, where 'above' is it defined, and what do you mean by 'further' modified; what is the first modification?
l. 320: Snow depths were created using eq. 2 (or 3?) and the assimilated SWE and snow density values, but they were also (on top) assimilated with observed snow depth values?
Section 5.2.: The modifications to SWE described here are they meant in l. 310?
Fig. 8: Caption does not explain all markers used in the figure. It is confusing that the SWE observations are (blue and gray) crosses (I am guessing, it is not explicitly stated in the caption) while the other crosses are the derivatives. Why are the absolute precipitation values so different for the three ice types? Are they only shown qualitatively, do they refer to the values on the y-axis?
l. 339: strange sentence: snow depth variability ... snow-depth variability?
l. 340 f.: Did you come up with the idea to use mean snow depth minus 1 std or has it been used elsewhere earlier? Is this related to the fact that the effective thermal conductivity of snow (partly due to snow depth being variable instead of one mean value) would be much higher than the thermal conductivity (point) measurements that you are using here? (edit: you make this connection much later in the paper)
However, if only 16% of the data show this or a lower value, it sounds like a quite small value to be used for the simulations? Maybe this is more like 'tuning' the ice thickness model (for Nloop and Sloop) and there are other reasons why the simulated ice is otherwise thinner than the observed one?
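The 'mean minus one standard deviation' criterion under discussion is straightforward to express, and a sketch with invented transect depths also shows where the roughly 16% figure comes from (for a normal distribution, ~16% of points lie below mean minus one standard deviation):

```python
import numpy as np

# Hypothetical level-ice transect snow depths (m); the real MOSAiC transects
# contain thousands of Magnaprobe points, so these values are illustrative.
depths = np.array([0.10, 0.15, 0.22, 0.30, 0.08, 0.18, 0.25, 0.12])

# Effective snow depth used to drive the 1-D ice growth model: emphasizing the
# thin-snow bedform troughs that locally control heat loss and ice growth.
eff_depth = depths.mean() - depths.std()

# Fraction of the transect thinner than the effective depth.
frac_below = (depths < eff_depth).mean()
print(f"effective depth = {eff_depth:.2f} m, fraction below = {frac_below:.2f}")
```

Whether driving a 1-D model with a depth that only ~16% of the surface falls below is a physically motivated choice or an effective tuning is exactly the question raised in this comment.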
l. 378: than than
For me, what is written in Section 6.1 is a summary with some conclusions but not a discussion...
l. 394: 'SnowModel-LG reproduced the observed snow evolution' because it was assimilated
l. 400: 'These SSS and SBS values were about three times as large as in Liston et al. (2020)' -> or maybe this is related to the precipitation in the reanalysis being multiplied by 2.13?
l.414/5: This could be mentioned already on l. 327 to explain why you set D to zero for these time periods.
l.416 affect -> effect + I suggest to phrase this more precisely: deformation contributed to ... % of ...
l.426: between ... to ... -> between ... and ... + Why are there only n=16 data points for Nloop and Sloop and only n=2 for Runway? (in Fig. 7 there are more...)
l.432/3: 'snow redistribution ... was able to be detected' -> 'we were able to ...' or 'it was possible to...' or '...could be detected'?
l.432/3: 'Our findings confirm that the precipitation observations have sufficiently low errors that a minor signal such as snow redistribution due to sea ice deformation was able to be detected' -> With 'our findings confirm' do you mean the high correlation between the time derivative of D and the total deformation? If so, please write it like that. If not, please elaborate more on why you think the findings (which?) confirm the accuracy of the precipitation data. Because otherwise I would think that the higher the uncertainty/error in the precipitation data, the higher the fraction of snow that would have to be (erroneously) attributed to deformation in order to make the simulations and the observations match.
l. 461: 'a known phenomena' -> phenomenon
l. 480: check sentence: for example, what is 'younger ice thickness' + 'the oldest and thickness ice'
Fig. 11: Why are there only 3 data points used for Runway? Why are the ice thicknesses from the coring site and the Ridge Ranch representative enough to be included in the comparison in Fig.9 but not here? If they were included, the correlation would be smaller because these observations show clearly thinner ice than the simulations with snow depth = level ice mean minus 1 std.
l. 499: Why not write 'Our bedform parametrization (using the level ice mean snow depth minus one standard deviation),...' instead of 'Our bedform parametrization, as described in Section 5.3,...'? Easier for the reader, only a few letters more.
l. 499-505: Interesting idea, but I think a more detailed and more comprehensive analysis focusing on this aspect should be conducted before any recommendations can be given about what works better (using a different thermal conductivity value vs. a different snow thickness value, and especially which one).
l. 500: that -> than
l. 507: citation not correctly embedded
l. 509: I assume you want to refer to Section 3.1 also for the initial thicknesses.
Section 6.4: I would suggest to use consequently 'the simulated snow/ snow density/SWE/whatever' to make it clearer when you write about the simulated results in contrast to observations
l. 531 f: 'All [simulated, see comment above] snow melted by 8 July, which fits well with the transect observations ... On level ice with snow reduced by one standard deviation, [simulated] snow was fully melted even earlier - about 3 weeks prior to the average snow cover. This coincides well with the estimates from eight thermistor chains deployed on level ice (Lei et al., 2022).' -> What does this mean? According to transect observations (i.e. real people on the ice) the snow melted on 8 July, while according to (eight) thermistor chains this happened already three weeks earlier? Are eight thermistor chains only a subset of all (how many?) thermistor chains? Is this related to spatial heterogeneity/a different ice type?
l. 544/5: What do you mean? There is more melting around thermistor chains?
l. 556: Again, I guess you mean Sect. 3.1?
l. 559-567: Are these results obtained from combining the model and the observations or are/can they (be) obtained from the observations? (and have been published before) The newly added information through your approach of combining the models with the observational time series should be made visible here.
l. 579: is the parametrization really found in 5.1?
l. 594: allow -> allow for?
Citation: https://doi.org/10.5194/egusphere-2024-3402-RC2
Viewed
- HTML: 183
- PDF: 48
- XML: 8
- Total: 239
- BibTeX: 3
- EndNote: 1