This work is distributed under the Creative Commons Attribution 4.0 License.
Retrieving Snow Water Equivalent from airborne Ku-band data: The Trail Valley Creek 2018/19 Snow Experiment
Abstract. Snow is an important freshwater resource that affects the health and well-being of communities, supports the economy, and sustains the ecosystems of the cryosphere. There is therefore a need for a spaceborne Earth observation mission to monitor global snow conditions. Environment and Climate Change Canada, in partnership with the Canadian Space Agency, is developing a new Ku-band synthetic aperture radar mission to retrieve snow water equivalent (SWE) at a nominal resolution of 500 m with weekly coverage of the cryosphere. Here, we present the concept of the SWE retrieval algorithm for this proposed satellite mission. We show that by combining a priori knowledge of snow conditions from a land surface model, such as the Canadian Soil Vegetation Snow version 2 model (SVS-2), within a Bayesian framework, we can retrieve SWE with an RMSE of 15.8 mm (16.4 %) and an uncertainty of 34.8 mm (37.7 %). To achieve this accuracy, a larger uncertainty in the a priori grain size estimate is required, since this variable is known to be underestimated within SVS-2 and has a considerable impact on the microwave scattering properties of snow. We also show that adding four observations at different incidence angles improves the accuracy of the SWE retrieval, because these observations are sensitive to different scattering mechanisms within the snowpack. These results validate the concept of the proposed Canadian satellite mission.
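Schematically, the retrieval described in the abstract amounts to the following Bayesian update (our notation for illustration only; the snow state vector θ and the backscatter σ⁰ are placeholders, not the paper's symbols):

```latex
% Schematic Bayesian SWE retrieval (notation assumed, not from the paper):
% theta   = snow state (layer thicknesses, densities, SSA, ...)
% sigma0  = measured Ku-band backscatter at one or more incidence angles
% p(theta)          : prior from the SVS-2 ensemble
% p(sigma0 | theta) : likelihood from a forward radiative transfer model
\[
  p(\theta \mid \sigma^{0}) \;\propto\; p(\sigma^{0} \mid \theta)\, p(\theta),
  \qquad
  \widehat{\mathrm{SWE}} = \mathrm{E}\!\left[\,\mathrm{SWE}(\theta) \mid \sigma^{0}\,\right]
\]
```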
Competing interests: Some authors are members of the editorial board of the journal The Cryosphere.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. Views expressed in the text are those of the authors and do not necessarily reflect the views of the publisher.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2025-2317', Anonymous Referee #1, 09 Jul 2025
In this paper, the authors extended a previously developed Bayesian SWE estimation algorithm to a new Ku-band radar sensor deployed in Canada. Several important improvements were made.
The achieved SWE estimation accuracy was below 20 mm and could be improved to 15.8 mm if three additional observation angles are provided. The authors made great efforts to explicitly describe the influence of the prior mean and variance on the accuracy of the retrieval results and their MCMC-estimated chain uncertainties. They also described the ability of MCMC, with a proper constraint setting, to correctly characterize a layered snow stratigraphy. The discussions are in-depth, and most of them are correct.
I have only the following suggestions.
Major:
1. In the abstract, what is the physical snow radiative transfer (RT) model used to describe the backscattering at the four incidence angles?
2. It is suggested to provide a false-color image, a DEM, and a land cover map in addition to the backscatter image in Figure 1.
3. For Section 2.3, it is unclear whether the SVS-2 simulation dataset is a full regional map or one that covers only several individual points. Additionally, the description of the forcing dataset contradicts itself between line 126 (SM in Figure 1) and line 131 (neighboring weather stations).
4. Can SVS-2 consider wind compaction and effectively model the wind slab layer for snow in Canada?
5. For lines 147-153, does it mean that all sites in Figure 1 use the same single snow profile as the prior? How did you determine the variance of the prior distribution?
6. Line 233: What does "top 30 ensemble members" mean? Are they the 30 members closest to the study area, or those most similar to the measured snow profile?
7. Line 253: Could you use equations to describe the idea of DEMCz for guiding the direction of chain evolution? (A schematic form is sketched after this list.)
8. Lines 266-270: The methodology for implementing the constraints can be mentioned here.
9. Lines 335-337: Using the Arctic version of the SVS-2 simulations, the uncertainties of the MCMC retrieval results are reduced when the prior standard deviation is reduced by narrowing down from all ensemble members to the top 30 members. This is reasonable.
Additionally, in Table 1, is the standard deviation of Hsnow(R) from the top 30 members (Arctic version) 9.5 or 0.95?
However, when comparing Fig. 8(b) and Fig. 7(b), I did not observe a reduction in the uncertainty (i.e., the range of the error bars on the y-axis). Could you check the values?
10. For the low correlation of the retrieved SWE to the measured SWE in Figs. 7 and 8, could you check the correlation between SWE (or SD) and the original radar observations input to MCMC? Are they highly correlated or scattered?
11. The content corresponding to panels (a) and (b) is not labeled in Figure 9.
12. Lines 341-346: This result indicates the considerable impact of the grain size prior on SWE retrieval: the impact comes not from the accuracy of the mean, but from the tolerance that allows the MCMC retrieval system to better match the observations. Increasing the variance of the grain size prior indirectly enhances the influence of the radar observations on the retrieval (see the weighting sketch after this list).
13. Did Figure 10 utilize a single-angle radar backscatter, as in Figures 8 and 9?
14. Line 405: "other variables like thickness" -> Actually, I think what you really meant might be stratigraphy, or the stratigraphy of layer thicknesses.
15. Lines 406-408: "It should be noted that when SWE is poorly estimated by the prior, the posterior SWE estimate has a higher error (Figure 8), where SWE estimates are concentrated around the initial modeled SWE and do not diverge from that initial." -> The accuracy of the prior SWE does influence the SWE retrieval, and this is truly reflected in the likelihood calculation. However, for the case in Figure 10(a), I think the key point is that the default SVS-2 gives a highly underestimated bottom-layer SSA (i.e., an overestimated grain size); with a small variance, the system is forced to trust this value excessively, which resulted in the underestimation of SWE. Additionally, the default SWE prior is underestimated and has a low variance, which helped make things slightly worse.
The key point is not to overtrust the land surface model, but to allow remote sensing to correct it. Trusting a wrong prior too much is the last thing to do, especially for radar-sensitive parameters.
16. Lines 428-430: "Similarly, when comparing the outputs from both SVS-2 versions, the prior density estimates for the R layer of the default version (Figure 10a), do not allow to sample values close to the measured ρsnow, which prevents the MCMC method to properly sample other variables, such as SSA for the same layer." -> I do not fully agree that the snow density influences the SSA; rather, I think the SSA influences itself, or they both influence each other. This is because, in general, the sensitivity of the radar signal to snow density is low.
17. Line 500: It also indicates that remote sensing and land surface models can work together to mutually improve their accuracies. Ku-band radar is sensitive to snow depth and to the SSA of the depth hoar layer, which can provide important information in regions with sparse measurements.
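Regarding major comment 7 above: a hedged sketch of the standard DE-MC(Z) proposal of ter Braak and Vrugt (2008), which is presumably the form the paper's DEMCz sampler follows (the exact variant used by the authors is an assumption here):

```latex
% DE-MC(Z) proposal (ter Braak & Vrugt, 2008); not necessarily the exact
% variant implemented in the paper.
% x_i: current state of chain i; z_{r1}, z_{r2}: two distinct past states
% drawn at random from the archive Z; d: number of sampled parameters.
\[
  \mathbf{x}_{p} \;=\; \mathbf{x}_{i}
  \;+\; \gamma \left( \mathbf{z}_{r_1} - \mathbf{z}_{r_2} \right)
  \;+\; \boldsymbol{\epsilon},
  \qquad
  \gamma = \frac{2.38}{\sqrt{2d}}, \quad
  \boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0},\, b^{2}\mathbf{I})
\]
% The difference of two archived states sets the direction and scale of the
% jump; the proposal is then accepted with the usual Metropolis ratio.
```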
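And to make major comment 12 concrete, a minimal worked form under the simplifying assumption of a scalar Gaussian prior and likelihood (an illustration only; the actual retrieval is multivariate and non-Gaussian):

```latex
% Scalar Gaussian illustration (assumption for clarity, not the paper's model):
% the posterior mean weighs prior and observation by their precisions.
\[
  \mu_{\mathrm{post}}
  = \frac{\sigma_{\mathrm{obs}}^{2}\,\mu_{\mathrm{prior}}
          + \sigma_{\mathrm{prior}}^{2}\,\mu_{\mathrm{obs}}}
         {\sigma_{\mathrm{prior}}^{2} + \sigma_{\mathrm{obs}}^{2}}
\]
% As sigma_prior grows, the weight shifts toward the observation term, which
% is why inflating the grain-size prior variance lets the radar data pull the
% retrieval away from a biased prior mean.
```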
Minor:
1. In the abstract, the uncertainty should be described more clearly to distinguish it from the RMSE computed against in-situ data. For example, use "the MCMC-estimated retrieval uncertainty".
2. In the caption of Figure 3, second line: but->by.
3. Line 489: We also show -> We would also expect?
4. Line 503: "that influence the most the radar sigma0": maybe change to "that influence the radar sigma0 the most"?
Citation: https://doi.org/10.5194/egusphere-2025-2317-RC1
AC1: 'Reply on RC1', Benoit Montpetit, 09 Aug 2025
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2025/egusphere-2025-2317/egusphere-2025-2317-AC1-supplement.pdf
RC2: 'Comment on egusphere-2025-2317', Michael Durand, 14 Jul 2025
Review of "Retrieving Snow Water Equivalent from airborne Ku-band data: The Trail Valley Creek 2018/19 Snow Experiment" by Montpetit et al.

The authors present a study applying a Bayesian retrieval algorithm to estimate snow water equivalent, using the SMRT radiative transfer model, priors derived from the SVS-2 land surface model, airborne radar observations from the University of Massachusetts radar instrument, and intensive field data from snowpits. The algorithm achieves retrievals with RMSE less than 30 mm across several different experiments; as 30 mm is the threshold required for the proposed Terrestrial Snow Mass Mission (TSMM), these results are highly significant. The paper does some really intriguing work exploring the sensitivity of the retrievals to prior estimates and their uncertainty, which will be of great value to the community. The results present a very interesting contribution to the literature on these types of retrievals and should be published. However, adding one more aspect to the analysis in the discussion is, in my opinion, critical to really understanding these results.

Retrieving SWE from radar measurements is an under-constrained problem. To solve this, Bayesian methods optimally weigh prior estimates of snow properties and the radar observations. Proper prior configuration is a condition for obtaining meaningful results, and it's not clear whether the prior is accurately configured here. Specifically, the prior uncertainty (implicitly) specified for SWE appears to be set too low. Thus, I recommend redoing the analyses with higher SWE uncertainty (see Major Comment 1). This should fit nicely with what the authors have already done in exploring the impact of the SSA uncertainty.

All other aspects of the analysis are really nicely described; the manuscript does a nice job dealing with the issues of both SSA bias and setting the SSA uncertainty too low, linking these in with model simulations specifically adapted for the Arctic environment. This is a critical aspect of solving the SWE retrieval problem and presents a significant advance over other recent efforts. Adding some analysis with more realistic SWE uncertainty will help to provide the context needed to understand the results shown here.

Major Comments

1. The configuration of the SWE uncertainty in the SVS-2 model simulation ensemble and its impact on the results needs to be further discussed. The model is run in a "point" simulation mode, and the resulting ensemble of 120 members (sometimes down-selected to 30 members) is used to configure the prior. This ensemble is from a previous study (Woolley et al., 2024). The ensemble-derived prior is then applied to retrieve SWE at multiple spatial locations, but as far as I can tell, the total SWE simulated in the model is far too precise an estimate when applied as a prior at these various locations. The prior uncertainty appears to be shown as the width of the red rectangles in Figures 7-9; in Figure 7a, e.g., the prior uncertainty appears to be 10 mm or less, and in Fig. 7b it appears to be 5 mm, on the order of 5-10 %. The uncertainty for the other experiments is similar. While the study does not quantify the spatial variability of snowpack SWE, in-situ SWE appears (in e.g. Fig. 7a) to range from around 60 mm to around 125 mm. Since this SWE from the simulation is being applied as a prior at each location, the uncertainty in simulated SWE should really be closer to the snowpit SWE spatial variability, so 50 mm (or more if you allow for bias). So, using the ensemble as an estimate of the SWE at these snowpits represents an almost order-of-magnitude mismatch in the uncertainty (~5-10 mm vs ~50 mm). This mismatch can have significant implications for the validity of the retrieval: basically, the retrieval is implicitly assuming the prior estimates of SWE are already known to a high degree of precision. The fact that the SWE priors are not explicit, but rather represented as layer thickness and density, can be navigated by increasing the respective uncertainties to make the resulting SWE uncertainty closer to the snowpit SWE standard deviation (a sketch of this ensemble SWE computation follows this review). In summary, I think this issue needs to be addressed head-on in the paper, including an additional experiment (or a revision of one of the other experiments) with this increased SWE uncertainty.

2. It is really remarkable that there is so little variability among the SWE retrievals. Fig. 7a shows that all the retrievals except one seem to fall in a range of 75±5 mm, when the snowpits range from 60 to 125 mm. At first glance, this appears to be totally expected and a result of having the SWE uncertainty set so low: if you start with a prior SWE that's known to within 5 %, you're not likely to change it very much, regardless of the measured backscatter value. However, Fig. 7a also shows that the retrieval adjusts fairly significantly, moving down from ~90 mm to 75 mm. Other experiments show less change in the average retrieved SWE (the Arctic SVS-2, the top 30 ensemble members, the increased SSA uncertainty, and the multiple observation angles), but across all six experiments (Figs. 7a, 7b, 8a, 8b, 9a, 9b) there is almost no variability across the retrieved SWE values. Is this due to a lack of variability in the observed backscatter? Do the retrieved soil properties from Montpetit et al. (2024) play a role? Please discuss some possible reasons for this lack of variability among the retrievals in the manuscript.

Minor Comments

1. Line 64: Please add that for these Bayesian methods, it's key to correctly specify the SWE uncertainty, which in prior studies was done by specifying layer thickness and density uncertainties.

2. Line 92, Figure 1: Please comment on the difference in backscatter between the two images. Presumably these are made at different incidence angles? At first blush, it looks as if nearly the same scene produced significantly different backscatter, so this may need some additional discussion.

3. Line 92, Figure 1: The combination of a point-scale SVS-2 simulation and the reference to the Canadian High Resolution Deterministic Prediction System was a little confusing. I thought at first that CHRDPS was used in the modeling setup. I recommend simply removing that grid and the mention of CHRDPS if those data are not used in the study.

4. Clarify what "top 30" means in the methods, the first time it's introduced (line 233), not in the results (line 330). Is it derived differently for each snowpit, or is it the same 30 members for all snowpits? Is it based only on SWE, or on SWE and SSA or other properties? Is it, e.g., the 30 SWE values closest to the true average snowpit SWE?

5. Line 226: Please note that when deriving SWE priors from the ensemble, the resulting prior SWE uncertainty is important and cannot be directly obtained from the model ensemble estimates of layer thickness and density uncertainty, due to possible ensemble correlations of these quantities.

6. Line 237, Table 1: Please add the SWE ensemble standard deviation and mean values here.

7. Line 245: Define all symbols in Equation 1. You should also note the confusing aspect of σ being used to represent both uncertainty and backscatter.

8. Line 278, Figures 3, 4 and 5: Calling the red line the "posterior" is sure to create confusion, as "posterior" usually refers to an output of the retrieval, whereas this is a normal distribution with mean and standard deviation derived from the snowpits.

9. Line 280, Section 4.1: It is critical to provide basic statistical summaries of the snowpit SWE. How variable was the SWE? This must be quantified, given the study objectives.

10. Line 280, Section 4.1: It is critical to provide a statistical summary of the simulated SWE. This can't be easily retrieved from the depth and density histograms, because they may be cross-correlated. The little red boxes in Figures 7-9 do not provide enough quantitative information, given that the paper is aimed at SWE accuracy.

11. Line 280, Section 4.1: From comparing the various figures, it looks as if the simulation underestimates total depth and overestimates total density, and ends up with a slight bias in SWE. Please comment on this and clarify these biases.

12. Line 280, Section 4.1: Please directly compare the measured snowpit SWE and the ensemble SWE, and discuss the implications of SWE uncertainty estimation for the retrieved SWE.

13. Line 315, Figure 7: To me, it looks like the basic result is that the high bias in SSA leads to an underestimation of scattering, which then leads the retrievals to bias the SWE low (Fig. 7a). However, this bias does not persist in all other scenarios; the Arctic-adapted SVS-2 has less bias in SSA and leads to less bias in SWE. It would be nice to comment further on this.

14. Line 315, Figures 7, 8, 9: The error bars on the measurements are confusing. In this context, error bars like these imply an uncertainty in the measured value at the snowpit. However, if I understand correctly, they represent the spatial variability of the measured values, which I don't think is the same as the uncertainty. The uncertainty is how accurately you think you measured SWE at each place, whereas spatial variability is how much SWE varies from place to place. Please derive a reasonable SWE uncertainty and change the error bars.

15. Line 315, Figures 7, 8, 9: The error bars on the model seem to have been drawn wrong in panel (a)? The dashed line appears at the bottom of the shaded area, but I think it should be in the middle of the shaded box.

16. Line 320: It's unclear what SWE uncertainty means in this context. Is it derived from the Markov chains? It almost reads as if this quantity is calculated across the retrievals at the various snowpits, but that is spatial variability rather than uncertainty, which is not the same thing. If uncertainty were calculated at each snowpit, then you'd have a different uncertainty for each snowpit, as you ought, and Table 2 could summarize the average uncertainty, which may be what is there; in any case, please clarify!

17. Line 350, comparing Figure 8 vs Figure 9: Why don't the error bars look the same here? In Figure 8a, the default SVS-2 looks much wider than in Figure 8b. But as I read the paper, this shouldn't change when we go to Figure 9, correct? Yet Figure 9 looks much smaller.

18. Line 360: I found Figure 10 a bit confusing. Is this at a single pit? In that case, the snowpit pdf is not a reasonable thing to show, right? Why not just show the observation at that particular pit? The same comment applies to Figure 11: why not show the measured value rather than the median over all pits? Those are not very relevant except for this one. Similarly, the text refers to bias, which is usually an average over many error samples; in this case, you should have just one error, equal to the difference between the MCMC estimate of each quantity and the measured value.

19. Line 373: What do the min and max values mean for each iteration in Figure 11?

20. Line 373: Please show the prior values in Figure 11.

Citation: https://doi.org/10.5194/egusphere-2025-2317-RC2
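To make Major Comment 1 (and Minor Comment 5) concrete, here is a minimal sketch of how the prior SWE spread implied by the ensemble can be computed directly from layer thicknesses and densities, preserving any cross-member correlations; all names and numbers below are hypothetical, not taken from Woolley et al. (2024) or the paper:

```python
import numpy as np

# Minimal sketch (hypothetical shapes and values): estimate the prior SWE
# spread implied by an ensemble of layered snow profiles, keeping the
# correlation between layer thickness and density within each member.
rng = np.random.default_rng(42)

n_members, n_layers = 120, 2
thickness = np.clip(rng.normal(0.25, 0.05, (n_members, n_layers)), 0.01, None)  # m
density = np.clip(rng.normal(280.0, 40.0, (n_members, n_layers)), 50.0, None)   # kg m^-3

# SWE per member: sum over layers of thickness * density
# (1 kg m^-2 of water equals 1 mm of SWE).
swe = (thickness * density).sum(axis=1)

print(f"prior SWE mean: {swe.mean():.1f} mm, std: {swe.std(ddof=1):.1f} mm")

# Deriving the SWE std from the marginal thickness/density stds alone would
# ignore thickness-density correlations across members, which is the
# reviewer's point: compute the SWE uncertainty from the joint ensemble.
```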
AC2: 'Reply on RC2', Benoit Montpetit, 09 Aug 2025
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2025/egusphere-2025-2317/egusphere-2025-2317-AC2-supplement.pdf