This work is distributed under the Creative Commons Attribution 4.0 License.
Snow Depth Estimation on Lead-less Landfast ice using Cryo2Ice satellite observations
Abstract. Observations of snow on Arctic sea ice are vitally important for sea ice thickness estimation as well as for understanding bio-physical processes and human activities. This study is the first assessment of the potential for near-coincident ICESat-2 and CryoSat-2 (Cryo2Ice) snow depth retrievals in a lead-less region of the Canadian Arctic Archipelago. Snow depths are retrieved from the absolute difference in surface heights from near-coincident ICESat-2 and CryoSat-2 passes, 77 minutes apart, after applying an ocean tide correction between the two passes. Both the absolute mean snow depths and the snow depth distributions retrieved from Cryo2Ice compare favourably to in-situ measurements. All four in-situ sites had snow with saline basal layers and different levels of roughness/ridging. The retrieved Cryo2Ice snow depths were underestimated by an average of 20.7 %, which is slightly higher than the tidal adjustment applied. Differences between the Cryo2Ice and in-situ snow depth distributions reflected the different sampling resolutions of the sensors and the in-situ measurements, with more heavily ridged areas producing a larger mean underestimation of the snow depth. Results suggest the possibility of estimating snow depth over lead-less landfast sea ice, but attributing 2–3 cm biases to differences in sampling resolution, snow salinity, density, surface roughness and/or errors in the altimeters' tidal corrections requires further investigation.
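To make the retrieval described in the abstract concrete, the following is a minimal sketch (not the authors' code) of dual-altimeter snow depth differencing, assuming tide-corrected ellipsoidal heights at co-registered points and a commonly used propagation correction based on bulk snow density; the function name, heights and density values are illustrative assumptions.

```python
import numpy as np

def snow_depth_from_cryo2ice(h_is2, h_cs2, rho_snow_kg_m3=300.0, tide_adjustment_m=0.0):
    """Illustrative Cryo2Ice-style snow depth retrieval (not the authors' code).

    h_is2, h_cs2 : tide-corrected ellipsoidal heights (m) at co-registered points,
                   assuming IS2 reflects at the air-snow interface and CS2 at the
                   snow-ice interface.
    rho_snow_kg_m3   : bulk snow density used for the radar propagation correction.
    tide_adjustment_m: optional residual inter-pass tide adjustment (m).
    """
    # Slow-down factor for Ku-band radar in dry snow, a widely used empirical form:
    # eta_s = (1 + 0.51 * rho_s)^1.5 with rho_s in g cm^-3.
    eta_s = (1.0 + 0.51 * rho_snow_kg_m3 / 1000.0) ** 1.5
    # Raw laser-minus-radar height difference, adjusted for the tide change
    # between the two passes (77 minutes apart in this study).
    dh = (np.asarray(h_is2, float) - np.asarray(h_cs2, float)) - tide_adjustment_m
    # Divide by eta_s because slower propagation in snow stretches the radar range.
    return dh / eta_s

# Example with made-up ellipsoidal heights (m):
h_is2 = np.array([15.42, 15.55, 15.60])
h_cs2 = np.array([15.10, 15.21, 15.30])
print(snow_depth_from_cryo2ice(h_is2, h_cs2, rho_snow_kg_m3=320.0))
```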
Status: closed
- RC1: 'Comment on egusphere-2023-2509', Anonymous Referee #1, 14 Dec 2023
Review of “Snow Depth Estimation on Lead-less Landfast ice using Cryo2Ice satellite observations” by Saha et al.
Summary:
The study assesses the potential for near-coincident ICESat-2 and Cryosat-2 (Cryo2Ice) satellite data in estimating snow depth over landfast ice in the Canadian Arctic Archipelago. Snow depth is retrieved by calculating the absolute difference in surface height from the two satellites, considering an ocean tide correction. The study compares the retrieved snow depths from Cryo2Ice with in-situ measurements, showing good agreement in terms of mean values. However, Cryo2Ice snow depths were, on average, underestimated by 20.7%. Discrepancies are attributed to differences in sampling resolutions, snow characteristics, surface roughness, and tidal correction errors. The results suggest the potential for estimating snow depth over lead-less landfast sea ice, but further investigation is needed to understand biases related to sampling resolution, snow salinity, density, surface roughness, and altimeter correction errors.
General Comments:
This is an interesting study, which will be valuable for improving our understanding of retrieving snow depth with a dual-altimeter approach. I had no problems following the paper, but I believe clarity can be improved. However, there are some parts of the analysis and discussion that I think need clarification and revision. I think this work deserves publication, but major revisions are needed.
My main concerns are:
- There is quite some focus on the tidal correction, which I agree is important. But one of the main limitations, from my point of view, is the large CS2 footprint and the noise in the CS2 height retrievals, which is not surprising given what we know from previous studies. Considering the relatively small sample size, this will have a large impact. The reasons for the CS2 height uncertainties are discussed by the authors, e.g., surface roughness, snow salinity and scattering horizons in the snow layer. And given the significant difference in footprint between IS2 and CS2, we can hardly assume that CS2 heights will represent the corresponding snow-ice interface, even if colocated. The snow depth distributions from the in-situ measurement sites indicate how much spread there is within just one CS2 footprint. In addition, the retracker used for the CS2 height retrievals is a threshold retracker with a fixed threshold, and we cannot be sure that it is tuned for exactly the ice conditions found in this area. I believe this study can contribute to characterizing and quantifying the uncertainties and limitations of this approach, but this should be emphasized more clearly. For example, I think the comparison between the mean/median values of Cryo2Ice and in-situ measurements has only limited meaning, which brings me to my next point.
- I am not an expert in statistics, but some of the decisions made in the processing need clarification and potential revision. I do not think it is a good idea to simply drop negative snow depth values. From a physical point of view they do not make sense, but statistically they are important, reflecting the impact of uncertainties. By removing only negative values, your snow depth retrieval is likely biased towards higher values. The negative values should also be part of all the histograms, because that is the reality when you subtract one height retrieval from another and both come with uncertainties. Moreover, some steps and figures need to be explained in more detail; see the specific comments below.
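To illustrate the truncation bias raised in this point, here is a small, self-contained simulation (purely illustrative numbers, not the study's data): adding zero-mean height-difference noise to a fixed snow depth and then discarding the negative retrievals shifts the mean high.

```python
import numpy as np

rng = np.random.default_rng(42)

true_depth_m = 0.20        # assumed "true" snow depth (illustrative)
sigma_noise_m = 0.15       # combined IS2/CS2 height-difference noise (illustrative)

# Simulated retrievals: truth plus zero-mean noise from differencing two noisy heights.
retrieved = true_depth_m + rng.normal(0.0, sigma_noise_m, size=100_000)

print(f"mean of all retrievals:        {retrieved.mean():.3f} m")
print(f"mean after dropping negatives: {retrieved[retrieved > 0].mean():.3f} m")
# The truncated mean exceeds the true depth, i.e. removing negatives adds a positive bias.
```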
Specific Comments:
L101: Figure 3 is introduced before Figure 2.
L113: For the snow depth measurements, what was your sampling strategy? Can you be more specific here? Did you walk straight transects? Did you ensure representative sampling, considering the fraction of deformed sea ice?
L157: The MSS is not mentioned under 2.6. I suggest briefly explaining the reason here.
L164: To my knowledge, the ATL07 product does not contain the individual photon heights, but segments of different length that aggregate the photon heights from the ATL03 product. I assume you have used these segments?
L167: Are the retrieved Cryo2Ice snow depths not arranged along a straight line? Why then investigate spatial autocorrelation? Isn’t it nearly 1D? Moreover, when I look at Fig. 8 (bottom plot), I find it hard to imagine how this works. The sample size is not very high and there is a lot of noise, and the spacing between points is already 300 m. Can you show a variogram? (Just in the response, it does not need to go into the manuscript.)
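For reference, the requested variogram could be produced with a short along-track calculation like the following sketch (positions, depths and bin widths below are hypothetical):

```python
import numpy as np

def empirical_semivariogram(x_m, values, bin_width_m=300.0, max_lag_m=3000.0):
    """Empirical semi-variogram for 1D along-track data (illustrative sketch)."""
    x_m = np.asarray(x_m, float)
    values = np.asarray(values, float)
    # Pairwise separations and squared differences (each pair counted once).
    iu = np.triu_indices(len(x_m), k=1)
    lags = np.abs(x_m[:, None] - x_m[None, :])[iu]
    sqdiff = ((values[:, None] - values[None, :]) ** 2)[iu]
    edges = np.arange(0.0, max_lag_m + bin_width_m, bin_width_m)
    centres, gamma = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (lags >= lo) & (lags < hi)
        if in_bin.any():
            centres.append(0.5 * (lo + hi))
            gamma.append(0.5 * sqdiff[in_bin].mean())   # semi-variance per lag bin
    return np.array(centres), np.array(gamma)

# Hypothetical 300 m spaced snow depths with some large-scale structure plus noise:
x = np.arange(0.0, 9000.0, 300.0)
d = 0.2 + 0.05 * np.sin(x / 2000.0) + np.random.default_rng(0).normal(0, 0.03, x.size)
print(empirical_semivariogram(x, d))
```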
L196: That’s a nice approach with the Sentinel-1 backscatter. I suggest checking the “stability” and representativeness of the ICESat-2 heights, making use of the other beams. Just compare the height distributions from the 3 strong beams for the area of interest.
Figure 4: I suggest changing the legend. It is misleading. It looks like the backscatter of IS2/CS2 is shown here…
L206: Is this related to Figure 11? Maybe show this together with Figure 4? Farrell et al. (2020) primarily use ATL03. The ATL07 segments can be quite long. How many segments do you get on average within the 300 m segments? Can you derive a meaningful roughness from this?
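As a concrete version of the segments-per-window question, a sketch of a per-300-m roughness metric (hypothetical inputs: ATL07 segment heights and along-track distances) that also reports the segment count in each window:

```python
import numpy as np

def roughness_per_window(along_track_m, seg_height_m, window_m=300.0):
    """Segment count and height standard deviation per along-track window (sketch)."""
    along_track_m = np.asarray(along_track_m, float)
    seg_height_m = np.asarray(seg_height_m, float)
    edges = np.arange(along_track_m.min(), along_track_m.max() + window_m, window_m)
    window_index = np.digitize(along_track_m, edges)
    stats = []
    for k in np.unique(window_index):
        h = seg_height_m[window_index == k]
        # A roughness (std-dev) is only meaningful with more than a couple of segments.
        stats.append((len(h), float(h.std(ddof=1)) if len(h) > 1 else np.nan))
    return stats  # list of (segment count, height std-dev in m) per window

# Hypothetical ATL07-like segments, irregularly spaced along track:
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 1500.0, 60))
h = 15.4 + rng.normal(0.0, 0.05, x.size)
print(roughness_per_window(x, h))
```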
L236: Figure 7 -> Figure 6?
Figure 5: The blue line is not explained.
L251: I don’t see the negative values in Figure 8. I suggest adding a class with a specific colour for values <0. From Fig. 7, it does not look like negative values primarily occur close to the coasts.
L252: I would argue that with removing negative values, you introduce a positive bias in the snow depth retrieval. It will only make sense if you assume that underlying uncertainties affect the snow depth exclusively in one direction. But looking at Figure 7, it just seems that there is significant noise on the CS2 heights, which goes in both directions (positive and negative).
L295: I haven’t fully understood why this test is done. “The test results show significant difference between in-situ sites which was also evident in the corresponding Cryo2Ice snow depths.” Which are the corresponding Cryo2Ice snow depths? I guess there are just a handful in the vicinity of each site?
L299: Related to the previous question: How many Cryo2Ice snow depths are you using for the comparison?
Figure 10: I suggest showing the “raw” distributions, not the density functions. Again, how many Cryo2Ice samples have been used at each site for the PDFs?
Figure 7: It would be also interesting to see the IS2 heights from the co-registration, averaged on the 300 m segments. Perhaps you can add them here?
Figure 8: I suggest adding the mean and standard variation from the in-situ measurements at the 4 sites.
L363: R**2 = 0.04 basically means no correlation I believe. But considering the noise level, especially from the CS2 heights, and the relatively low sample size, I wouldn’t expect a higher R here.
Citation: https://doi.org/10.5194/egusphere-2023-2509-RC1
- AC1: 'Reply on RC1', Monojit Saha, 10 Feb 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-2509/egusphere-2023-2509-AC1-supplement.pdf
- RC2: 'Comment on egusphere-2023-2509', Anonymous Referee #2, 16 Dec 2023
This study presents a first estimation of snow depth from near-coincident spaceborne dual-frequency (laser and radar) altimetry over lead-less landfast ice in the Canadian Arctic Archipelago. The authors utilize one CRYO2ICE track (separated by 77 minutes and approximately 1.5 km) and estimate snow depth from ellipsoidal elevations after applying a tidal correction. This tidal correction allows snow depth to be derived from the difference in elevations, assuming the laser (ICESat-2/IS2) is reflected at the top of the snowpack/air-snow interface and the radar (CryoSat-2/CS2) is reflected at the snow-ice interface over lead-less sea ice. However, due to the absence of leads, the impact of tides is accounted for by comparing the tide-induced changes in water level from the models applied to the satellite data with tide-gauge estimates. The derived CRYO2ICE snow depth estimates (with negative snow depths removed) are compared with a dedicated ground-based campaign along the orbit.
An interesting paper that provides valuable results on the dual-frequency snow depth approach – and, with the limited reference data available to compare with, especially along CRYO2ICE orbits, this paper can provide some interesting insights along this one orbit. The inclusion of much-needed in situ reference data along such an orbit feeds into some good discussion topics, and the paper warrants publication. The paper reads easily, although I have a few comments regarding the figures and tables, which I hope the authors will consider. Furthermore, I still have a few major points to be addressed before publication, as I believe more work is required to ensure that CS2 and IS2 are comparable at the resolution used in the study, that the tide correction is applicable beyond just this example (and if not, that this is further discussed), and that some of the assumptions of this study hold up.
Major comments
Tide correction and discussion relating this to different ice regimes/areas
I’ve yet to encounter this methodology before, but I am intrigued by it, and wonder how it may be applicable on a larger scale. However, I am not fully convinced by the correction as applied now, was somewhat confused by it, and hope the authors could provide some more information regarding the ocean tides used in the study. You state that there is an average difference along track of 7.9 cm (on the co-registered points? – and if so, how is the IS2 ocean tide at the co-registered points even computed?), but the difference in water level from CHS was 6 cm. This compares only one point (tide gauge) with the full along-track data. Could you provide a figure (maybe just in the response to reviews, but potentially also in the manuscript) of the difference in the ocean tide models along-track (not only the average value, but perhaps the maximum and minimum too, or the distributions)? This would also feed into your statement about the 1.9 cm correction (when compared with the CHS water level) representing the systematic bias – this seems somewhat low when we see that the ocean models may differ by +/- 12 cm (ranges +/- 50 cm vs +/- 62 cm) along the track. I think the study would greatly benefit from some more results and discussion on this.
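The along-track summary requested above amounts to something like the following sketch (all inputs hypothetical: arrays of the ocean-tide corrections interpolated to the co-registered points for each pass, and a scalar tide-gauge water-level change; the numbers only loosely echo the 7.9 cm vs 6 cm values quoted above):

```python
import numpy as np

def tide_difference_stats(tide_is2_m, tide_cs2_m, gauge_change_m):
    """Summarise the along-track difference in modelled ocean tide between passes (sketch)."""
    diff = np.asarray(tide_is2_m, float) - np.asarray(tide_cs2_m, float)
    return {
        "mean_model_diff_m": float(diff.mean()),
        "min_model_diff_m": float(diff.min()),
        "max_model_diff_m": float(diff.max()),
        # Residual between the mean modelled change and the tide-gauge change,
        # i.e. the kind of systematic adjustment discussed in the manuscript.
        "mean_minus_gauge_m": float(diff.mean() - gauge_change_m),
    }

# Illustrative numbers only:
tide_is2 = np.full(50, 0.079) + np.random.default_rng(7).normal(0.0, 0.01, 50)
tide_cs2 = np.zeros(50)
print(tide_difference_stats(tide_is2, tide_cs2, gauge_change_m=0.060))
```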
In addition, I appreciate that the study investigates only the snow cover on landfast ice along the one track where you acquired ground observations. However, this methodology of applying a tide correction is interesting for coastal and landfast sea ice beyond just the Canadian Archipelago. The manuscript would benefit from providing a greater perspective on applying this on a larger scale (across the Arctic, discussing whether ellipsoidal height differencing is better than or equal to freeboard differencing, etc.). If possible, it would be interesting to see whether it is feasible in, e.g., the Laptev Sea or similar – and also in areas with no tide gauges to compare with. Or at least a discussion about this would be very interesting.
Similarly, I believe that this work on landfast ice is important, but would have expected a discussion relating it to the bigger picture and sea ice in general (landfast and drifting) – could you provide some insights/relate it to e.g., deriving this in drifting sea ice in the Arctic/Antarctic? I acknowledge the very different sea ice/atmospheric patterns here, but I believe that this warrants some more discussion or at least, additional acknowledgements of the limitations/difficulties/differences when deriving this over landfast ice vs. drifting sea ice, beyond just the limitations regarding leads only.
Spatial scales, C2I snow depths and smoothing IS2 elevations
I believe there is more work to be done to ensure that differences in footprint and resolution have been thoroughly considered. Given that we are observing at very different scales (300 m x 1600 m vs. ATL07’s varying resolution), simply smoothing IS2 to 300 m does not seem sufficient. Also, considering that the average distance between IS2 and CS2 is 1.5 km, I believe more work needs to be done on the IS2 data to ensure that they have been processed to be “comparable”. I suggest having a look at the data from the other strong beams to investigate the variability along the IS2 tracks; from there, you may also see whether they are observing “similar surfaces/distributions”, which you have otherwise supported using the Sentinel-1 data. In addition, providing some statistics on segment lengths from IS2 would be great to fully understand the coverage of IS2 when smoothing and co-registering to CS2.
In addition, I would have liked to see the semi-variogram (or just variogram) that you have created showing the spatial autocorrelation of 1 km in the in-situ snow depth distribution. That somewhat counters the assumption that IS2 and CS2 are seeing similar snow – or at least similar snow variability (as they are separated by 1.5 km). The assumption that smoothing IS2 to CS2’s along-track resolution of 300 m is sufficient is not well supported, as studies on along-track radar altimetry usually apply some level of smoothing to the along-track radar data due to the noise (which is evident in your figure too). The CS2 data is – to a large extent – impacted by noise, off-nadir reflections (from ridges, leads and other features), etc., as seen in Figure 7. This might also explain your “lack” of correlation with IS2 roughness. Ideally, we would expect less difference in the CS2 ellipsoidal heights along the orbit than in the laser if there is full penetration to the snow-ice interface, but that is not the case. A value of R2=0.04 basically states that there is no correlation. However, I also do not think – based on the ellipsoidal heights already shown – that we would expect more, due to the noisy behaviour of CS2. Another comparison you could use to identify what primarily contributes to the variability in your snow depth estimates is the correlation between IS2 ellipsoidal heights and C2I snow depth versus CS2 ellipsoidal heights and C2I snow depth (or another measure of the CS2/IS2 elevations that is of smaller magnitude than the ellipsoidal heights along your track).
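The last suggestion can be checked with a few lines like the following sketch (synthetic heights; the noise levels are invented purely to show how the noisier altimeter ends up dominating the depth variability):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination between two 1D arrays (simple linear association)."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1] ** 2)

# Synthetic illustration: smooth laser heights, noisier radar heights.
rng = np.random.default_rng(1)
h_is2 = 15.40 + rng.normal(0.0, 0.03, 60)   # laser ellipsoidal heights (m)
h_cs2 = 15.20 + rng.normal(0.0, 0.12, 60)   # radar ellipsoidal heights (m)
snow_depth = h_is2 - h_cs2                  # simple differencing, no propagation correction

print("R2(depth vs IS2 heights):", r_squared(h_is2, snow_depth))
print("R2(depth vs CS2 heights):", r_squared(h_cs2, snow_depth))
# With these settings the CS2 term explains nearly all of the depth variability.
```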
I do believe there is room for more speculation/discussion related to the C2I snow depths and how they compare with the in-situ data – just comparing mean/median seems … insufficient or at least, as if there might be more to discover, especially since the distributions are different even if the average values are similar. But I’ll leave that up to you to see if you think there is more to extract from these plots and/or analysis.
Minor comments
Figures:
- Tables in figures should be removed (following TC policy) – either make the tables into individual tables, or simply include the information in the legend/on the figure (since it is primarily average values and such).
- In addition, have a look at the resolution of the figures (e.g., Figure 11) and consider improving the DPI for better visualization.
- Consider using the same color for the line representing locations of different sites across figures – it will make comparison between figures (e.g., Figure 7, 8 and 11) easier. Also, double-check that your figures work for readers with color deficiencies.
Tables:
- In general, there are quite a few tables that provide limited insight; their content could just as easily be included in the figures, which would likely also provide a more direct connection to, e.g., the distribution figures. I encourage you to reconsider the number of tables you have and whether the content of the tables could be put in the figures instead.
Data policy/statement:
“Available upon request” does not follow TC guidelines. Please provide a DOI for where to obtain the data – either the raw data or the processed data that you present in the manuscript. In addition, I am not able to open the link for the CryoSat-2 data, and I have not encountered this website before (you did not use the science server or FTP site?). Please check this link again to ensure that it is active.
Specific comments
Line 71-73. Do you include low-confidence data (that is, ATL03 flag of low confidence or another ATL07 flag) across the entire track, or only close to the coast – or what is meant by low confidence? Could you provide a measure of the amount of low-confidence data or was it flagged somehow? And where this low confidence is along the track?
Line 74. Could you provide the distance to the other strong beams too?
Line 88. Provide which threshold is used by the re-tracker. Also, I thought it was known as the “CPOM” re-tracker? Maybe I’m wrong.
Line 105-110. Include abbreviations (Site 1 – S1; Site 1 Ridged; S1R etc.), as they are used later on in figures and text, but not typed out in the text.
Line 135. Is Kwok et al. (2020) deriving it from total freeboard and sea ice freeboard? I believe it is the radar freeboard of CryoSat-2.
Line 140. Ensure consistency with naming of h(IS2)/hIS2 and h(CS2)/hCS2 throughout text.
Line 157. MSS is not mentioned in section 2.6. Please do.
Line 167. Do you have a reference for the Moran’s I test?
Figure 3. You mention co-registration of heights. What is meant here – how are you specifically co-registering the observations? Also, ensure consistency in the naming of h(CS2)/hi(CS2).
Figure 4. I think that it is an interesting discussion and I like the idea of identifying comparable surfaces using Sentinel-1 backscatter. But I don’t think the full picture of what is shown in the figure is being discussed in the text. You state that mean values are similar and therefore assume that the same surfaces are observed – even though the distributions differ, with a bi-modal distribution for IS2 and a higher amount of high backscatter (between -16 and -14 dB) observed than for CS2. Instead of doing a distribution-to-distribution comparison, could you instead do it per smoothed, co-registered point and look at residuals of backscatter between the points, to see how much they vary and at which locations they differ? In addition, the standard deviation lines do not make sense to me – why are they not separated by the same distance around the mean values?
Line 207. Consider re-phrasing this, as it reads as if Farrell et al. (2020) computed surface roughness from standard deviation of 300-m segments, where it in fact was 25-km segments.
Line 213-214. I don’t believe that Mallet et al. (2020) demonstrated that the use of fixed snow densities introduced significant biases in the snow depth retrieval, but rather significant biases in the sea ice thickness estimates. You state yourself that the difference in snow density (used to compute the refractive index) didn’t make much of an impact on the snow depth (Line 219-220).
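For context on the density sensitivity referred to here, the commonly used empirical slow-down factor shows why retrieved snow depth is only weakly sensitive to plausible density errors; a quick check with illustrative densities (not the study's values):

```python
def eta_s(rho_kg_m3: float) -> float:
    """Radar slow-down factor in dry snow: eta_s = (1 + 0.51*rho_s)^1.5, rho_s in g cm^-3."""
    return (1.0 + 0.51 * rho_kg_m3 / 1000.0) ** 1.5

# Retrieved depth scales as 1/eta_s, so a 50 kg m^-3 density error changes
# the depth by roughly 3 %.
for rho in (250.0, 300.0, 350.0):
    print(f"rho = {rho:.0f} kg m^-3 -> eta_s = {eta_s(rho):.3f}, depth scale = {1.0/eta_s(rho):.3f}")
```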
Figure 5. The text is quite small on the figure. I do wonder about distance sampled vs. number of samples at Site 2 (and for the others as well), since it was stated that you sampled every 5 m? Why is there such a difference in sample distance? Also, remove the table from figure and either incorporate into figure or include as separate table. Also, provide some comments/thoughts about the fact that you don’t have an equal number of samples for each location.
Figure 6. Consider changing the set-up of the subplots to 1 row and 2 columns, as it will allow the salinity and density to be compared as a function of depth more easily. Also, remove the table (perhaps incorporate it into the figure?).
Line 250. I do not believe negative snow depths should be removed – albeit not physically possible, they show the variation between IS2 and CS2 and provide insights in the differences between IS2 and CS2 too. In addition, it biases your statistics higher (which is already somewhat of an issue, since most of your C2I on average are smaller than the in-situ depths), so you should actually be observing an even bigger difference. Interesting that you do not see thicker snow than ~50 cm... I think you need a figure of the actual co-registered ATL07 smoothed vs CS2 along-track data, to truly see what is going on (as Figure 7 seems to show ATL07 in native resolution).
Figure 7. It is unclear in the legend, whether you’re looking at ATL07 heights (in their native resolution) or your 300-m smoothed ATL07 data. Also, consider making the colors comparable across figures (sites have different colors to the vertical lines across figures). In addition, why are we seeing 5-km averages – they are not used nor mentioned in the text? However, here you also see the variability of CS2 being more evident due to the majority of thin ice observations in IS2 data when smoothing at 5-km and CS2’s noise.
Line 264. What is meant by delineated with roughness? Please provide some more insights into this statement and reason for it.
Figure 8. Consider putting the latitude/snow depth plot on the y-axis of the image, so you can more easily compare them. Also, remember to include subplot numbers. In addition, perhaps I missed it, but how many C2I observations are used in this figure (and overall)? Somehow, these two plots make it look as if a different number of observations is shown.
Figure 10. Consider providing the specific information (bulk snow density) used to derive the snow depth of each site on the plot, for the reader to be reminded that for the site-specific calculations, different densities were used. Consider also providing information about the spread or similar in figure (or in other words, include Table 2 in Figure 10 as text).
Section 4.3. I really appreciate this section and discussion, very nice!
Line 330. “Radar heights can potentially be impacted by snow properties” – I would say that they certainly are!
Table 1. Why are there two values at a time in the mean snow depth column? Not fully clear.
Line 356. “Found bias of 2-5 cm, we can expect 15-40% systematic biases”… Maybe I missed it, but this seemed to be skipped relatively quickly. Could you provide more insights here? Also, what about the contribution of random uncertainties?
Citation: https://doi.org/10.5194/egusphere-2023-2509-RC2
- AC2: 'Reply on RC2', Monojit Saha, 10 Feb 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-2509/egusphere-2023-2509-AC2-supplement.pdf