Exploring the sensitivity to precipitation, blowing snow, and horizontal resolution of the spatial distribution of simulated snow cover
Abstract. Accurate snow cover modeling is of high importance for mountain regions. Alpine snow evolution and spatial variability result from a multitude of complex processes, including interactions between wind and snow. The SnowPappus blowing snow model was designed to add blowing snow modeling capabilities to the SURFEX/Crocus simulation system for applications across large spatial and temporal extents. This paper presents the first spatialized evaluation of this simulation system over a 902 km2 domain in the French Alps. Here we compare snow cover simulations to the spatial distribution of snow height obtained from Pleiades satellite stereo-imagery and to Snow Melt-Out Dates from Sentinel-2 time series over three snow seasons. We analyzed the sensitivity of the simulations to three different precipitation datasets and two horizontal resolutions. The evaluations are presented as a function of elevation and landform types. The results show that the SnowPappus model forced with high-resolution wind fields enhances the snow cover spatial variability at high elevations allowing a better agreement with observations above 2500 m and near peaks and ridges. Model improvements are not obvious at low to medium altitudes, where precipitation errors are the prevailing uncertainty. Our study illustrates the necessity of considering error contributions from blowing snow, precipitation forcings, and unresolved subgrid variability for robust evaluations of spatialized snow simulations. Despite the significant effect of the unresolved spatial scales of snow transport, 250 m horizontal resolution snow simulations using SnowPappus are found to be a promising avenue for large-scale modeling of alpine snowpacks.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-2604', Anonymous Referee #1, 12 Jan 2024
This paper validates the simulated snow cover distribution in alpine areas using a combination of state-of-the-art techniques. Some concerns I had while reading are already addressed in the discussion chapter. In my opinion, there are no problems with publishing the paper in TC. I have summarized some of my remaining concerns in the minor comments below. Addressing them is not a requirement for acceptance; please use them as a reference to improve the content in the revised draft.
minor comments
Fig. 5: In the “with transport” case, there are some areas with locally large snow depths. These are probably depressions or downwind slopes, but are these localized areas of deep snow consistent with those observed by the satellite? If this has been confirmed visually or already verified by Baron et al. (2023) etc., it would be good to mention it. In section 4.3.3 (L626-635), it is written that the simulation shows a larger snow depth in the depression; it would be good to state whether this large snow depth area is also consistent with the satellite.
3.1.1, Figure 6: Figure 6 shows the bias and the other validation metrics. I thought it would be easier to visualize the degree of agreement if there were also a scatter diagram of the snow depth data at each location, with the snow depth from the satellite on the horizontal axis and the simulated snow depth on the vertical axis (a minimal sketch is given below).
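For illustration only (not code or data from the study), a minimal matplotlib sketch of the suggested scatter comparison, with synthetic placeholder values standing in for the satellite and simulated snow depths at each pixel:

```python
# Hypothetical sketch of the suggested scatter diagram: satellite snow depth on the
# horizontal axis, simulated snow depth on the vertical axis, one point per pixel.
# The arrays below are synthetic placeholders, not data from the study.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
hs_satellite = rng.gamma(2.0, 0.8, 5000)                  # placeholder observed snow depths (m)
hs_simulated = hs_satellite + rng.normal(0.0, 0.4, 5000)  # placeholder simulated snow depths (m)

valid = np.isfinite(hs_satellite) & np.isfinite(hs_simulated)  # drop masked pixels
plt.figure(figsize=(4, 4))
plt.scatter(hs_satellite[valid], hs_simulated[valid], s=2, alpha=0.3)
lim = float(max(hs_satellite[valid].max(), hs_simulated[valid].max()))
plt.plot([0, lim], [0, lim], "k--", lw=1)                  # 1:1 agreement line
plt.xlabel("Satellite snow depth (m)")
plt.ylabel("Simulated snow depth (m)")
plt.tight_layout()
plt.show()
```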
L392-395: Regarding Figure 9 and the differences between years, the text states that 2018-19 showed wide spatial variability, but what are the differences between these two years in terms of weather conditions? For example, was 2018-19 a windier winter?
Fig. 12: It compares simulation data and satellite data in 2017-2018 and 2018-2019. Comparing these figures, 2017-2018 (a-c) and 2018-2019 (d-f) seem to be the same figure; is it possible that you have misplaced the figures?
Citation: https://doi.org/10.5194/egusphere-2023-2604-RC1 - AC2: 'Reply on RC1', Ange Haddjeri, 27 Mar 2024
RC2: 'Comment on egusphere-2023-2604', Anonymous Referee #2, 12 Jan 2024
The work of Haddjeri et al. focuses on improved modelling of snow in mountainous terrain, an important topic and one with much room for improvement. This research is of significance to snow hydrological modelling, water resource management, avalanche risk, ecology, environmental research, and climate projections; topics growing in importance in the face of anthropogenic climate change. Baron et al., 2023 introduced SnowPappus, a new blowing snow model intended to enable simulations across large spatial and temporal extents that better account for the heterogeneous spatial distribution of snow cover caused by complex topography. This study uses remote sensing data to perform the first spatialised evaluation of SnowPappus; the only realistic way to perform a spatially distributed analysis of model performance in such terrain. (It is great to see this issue of spatially distributed validation, raised in comments on Baron et al., 2023, addressed in such a timely manner.) As clearly stated in their abstract, Haddjeri et al. explore how simulation performance in different elevation bands and landform categories is influenced by three different precipitation forcing datasets and two different spatial resolutions. The results indicate that the introduction of SnowPappus does provide simulations with more physically realistic spatial patterns of snow cover that agree with our understanding of wind-driven snow redistribution processes. This is demonstrated through increased spatial variability of simulated snow at high elevations and on peaks and ridges, and a general decrease in snow height in these regions.
Overall, I enjoyed reading this manuscript, which has a nice introduction and some novel and valuable results. I believe the content is appropriate for the journal and I would recommend it for publication if the two major comments listed below are addressed. However, I think some of the writing and presentation could be improved quite substantially to help reinforce your point and better disentangle the impact of precipitation forcing vs snow transport (which you repeatedly state are hard to distinguish). I think it would be a shame to publish ‘as is’ because I think there is a lot more value that could be gained from these results with a small amount of additional analysis.
My review starts with some general comments on this study, followed by major comments which describe areas that I think need addressing before publication or would require revising major sections. After this, I chronologically work through the paper providing specific minor comments which I think would improve the manuscript, and then finally any specific technical comments. Whilst reading this review, please note the limitation that I have not used Crocus, AROME, SAFRAN, or ANTILOPE; as such, my comments are generalised and focus more on methods/discussion/presentation of results than specific technical model details.
General comments:
The novelty of this work lies in the introduction and assessment of a new model architecture, none of the results are particularly surprising or ground-breaking. Comparisons are performed at 250 m scales; it is subject to debate whether this is ‘snow-drift-permitting’ but, in any case, it is no surprise that the key driver of spatial variability at these scales is the precipitation forcing dataset. Despite this, it is clear to see the significant added value of the SnowPappus blowing snow model that arises at high elevations and on ridges and peaks/summits.
One problem I have with the manuscript is that, to me, it seems you haven’t quite decided on the ultimate goal of your study or the result which you really want to highlight; I find this dilutes the really interesting and valuable results held within. There are many points that you are claiming to address: spatially evaluate the SnowPappus model, analyse how precipitation forcing impacts models, analyse how spatial resolution impacts models, analyse how snow simulations can be assessed by landform or by elevation bands. There is a lot going on and I think the manuscript in its current structure does not aid the digestion of all this information. I think a clearer separation between these (e.g. separate figures for the analysis of spatial resolution and precipitation forcing) and the addition of some combined metrics quantifying the results would help a lot (e.g. a metric combining the results of all four precipitation forcings to directly compare with transport against without transport would help to disentangle precipitation biases from the snow transport results).
Sometimes you suggest you are trying to identify the optimal precipitation forcing dataset, e.g. P17 L334 and L359. However, I don’t think this is a key objective of the study (and not something that is found in the study), and therefore not worth including because it weakens the message you are trying to get across. Along similar lines, I think there is too much focus on the impact of the precipitation forcing and not enough emphasis on the impacts of the snow transport model, which is, I believe, the intended focus of this study. In some ways, it feels like the analysis of precipitation forcings is unnecessary for the assessment of SnowPappus; we already know that the precipitation forcing data will control simulated snow, and it just adds another (albeit interesting) aspect to the paper that detracts from the key point you are trying to make: SnowPappus IS improving the representation of blowing snow at 250 m resolution.
Major comments:
- Fig 12. Comparison of Sentinel-2 SMODs with simulations for 2017-2018 and 2018-2019.
- Has there been a mistake? Results seem to be identical for 2017-2018 and 2018-2019
- Fig 12: I cannot make sense of the text describing the results (L468-474), presumably because the figure is incorrect.
- One major problem I have with this manuscript is the presentation of the results and the consequently confusing discussion. I think the figures used could be improved and the results quantified with combined metrics to (1) better support your arguments, make the results clearer and easier to discuss, and potentially provide further insight that is not currently obvious, and (2) ease understanding for the reader and give others greater confidence in your results. You repeatedly state that it is hard to disentangle the impacts of precipitation forcing and snow transport, and I think this is partly due to the way you have presented your results. Here are a few thoughts on how I think you could change your results section to help with your analysis:
- You do not quantify your results at all. If you introduce some combined metrics which you could quote in your discussion, it would improve clarity and give the reader much more confidence in your results. Several examples are provided in the minor comments, e.g. P31 L621 a comparison of the mean SPS scores for landforms vs elevation bands would clearly quantify that the landform SPS values are more homogeneous.
- I like that your figures 6,8,10,11,12 are all the same format and summarise a lot of information. These could be supplemented with figures displaying combined results, e.g. combined SPS of all precipitation forcings so that you are just showing the impact of with transport vs without transport. This could help disentangle precipitation from snow transport.
- Providing plots (e.g. scatter) of observations vs simulation would show nicely how correlated the simulation results are with the observations. This could then be quantified with e.g. the Pearson correlation coefficient (a minimal sketch of such a combined comparison follows below).
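To make the suggestions above concrete, here is a minimal, hypothetical sketch (not code from the study) of a combined comparison: an error score averaged over all precipitation forcings, computed separately for the runs with and without snow transport. The forcing names, array shapes, and synthetic data are illustrative assumptions only.

```python
# Hypothetical sketch: average bias and Pearson correlation over all precipitation
# forcings, once for the "with transport" runs and once for "without transport",
# so the effect of SnowPappus can be read off independently of any single forcing.
import numpy as np

def combined_scores(obs, runs):
    """Mean bias (m) and mean Pearson r of each run vs observations, averaged over runs."""
    biases, corrs = [], []
    for sim in runs.values():
        valid = np.isfinite(obs) & np.isfinite(sim)       # ignore masked pixels
        biases.append(np.mean(sim[valid] - obs[valid]))
        corrs.append(np.corrcoef(obs[valid], sim[valid])[0, 1])
    return float(np.mean(biases)), float(np.mean(corrs))

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 0.8, 4000)                           # placeholder Pleiades snow heights (m)
forcings = ("SAFRAN", "SAFRAN HR", "AROME", "ANTILOPE")   # assumed forcing labels
with_transport = {f: obs + rng.normal(0.1, 0.35, obs.size) for f in forcings}
without_transport = {f: obs + rng.normal(0.1, 0.50, obs.size) for f in forcings}

print("with transport   :", combined_scores(obs, with_transport))
print("without transport:", combined_scores(obs, without_transport))
```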
If you decide to stick with the current figures, some minor comments on improving the current format are provided below under the results section in ‘Minor/specific comments’.
Minor/specific comments:
Title
- I think the title is a bit vague, especially the word ‘exploring’. “Analysing the sensitivity of a blowing snow model (SnowPappus) to precipitation forcing, blowing snow, and spatial resolution”.
1 Introduction
Overall, I really like the introduction and think it has a good structure. I think a few parts could be reworded for clarity.
- Statement in both introduction P1 L8-10 “The results show that the SnowPappus model forced with high-resolution wind fields enhances the snow cover spatial variability at high elevations allowing a better agreement with observations above 2500 m and near peaks and ridges.” and conclusion that the addition of SnowPappus provides better agreement with observations and yet the results do not show this; it is stated in the manuscript several times that this is subject to the bias in precipitation forcing. I see that you are just trying to make a generalised statement, which your results do somewhat support; however, I think it is slightly inaccurate and should be replaced by a less definitive statement e.g. “SnowPappus provides more physically realistic spatial patterns of simulated snow distribution expected due to the influence of blowing snow mechanisms in the mountains”.
- P2 L40 “However, the quantification of the precipitation is still impacted by important uncertainties.” I think you should at least briefly try to explain what ‘important uncertainties’ you are referring to.
- P2 L40-45 I think this needs rewording, it isn’t clear what you are trying to say, especially the last sentence (“Eventually, the representation of post-depositional processes, notably blowing snow transport, benefits from the development of dedicated high-resolution models coupled to or forced by atmospheric models.”) given you refer to it several times in the next paragraph. On that note, P2 L46 “The spatial evaluation of this last type of system is a challenge in itself.” I don’t think you should refer to the last paragraph (‘this last type of system’) without more explanation. It makes it unclear and hard to follow.
2 Data and Methods
- P5 L125-126 I like the decomposition into landforms. I know you have 250m pixels, but is there really nowhere in the entire French Alps and Pyrenees that is classified as flat, shoulder, or footslope?
- Inaccuracies in remote sensing datasets (Pleiades and S2 Theia snow collection). These are mentioned briefly in passing. But I think it would be worth specifically mentioning the uncertainty in the ‘observations’ you are assessing the model performance against.
- I’m unsure why you needed to use the very high-resolution (and expensive) Pleiades imagery for this experiment given you are resampling to 250 m anyway, and even the SAFRAN HR is 30 m. With Sentinel-2 offering free, open-access imagery at 10 m (20 m including the SWIR), was there much benefit from using this expensive, super high-res imagery? As stated in your discussion, P30 L603 “Unsimulated sub-grid processes are captured by the observation [Pleiades], which complexifies the analysis.” So why use such high resolution and complicate the analysis?
- P8 L174 Precipitation phase threshold. If this is not the method of precipitation retrieval used in the operational AROME NWP then this seems like an unfair comparison, especially if you are trying to decipher the optimal precipitation forcing (which you state a few times e.g. P17 L358-359). I understand that, for comparison purposes, keeping the threshold consistent between forcing datasets is a sensible thing to do. Perhaps don’t say that you are looking for the optimal precipitation forcing dataset because, as far as I understand, this is not the objective of the study.
- P11 L236-237 “A reference snow-free surface elevation is acquired during the summer.” I was unable to decipher what is used as your reference surface elevation for Pleiades snow heights. I think it would be worth clearly stating this.
- P11 L252 no justification is given for the 70% threshold
- P11 It is unclear whether your Pleiades snow height maps are at 0.5 m or 2 m resolution. It might be worth clarifying.
3 Results
- Jumping between figures sometimes makes the text hard to follow. Consider describing one at a time.
- P13 First paragraph in section 3.1.1 is not specific to the Grandes Rousses area; perhaps this should be moved out of this subsection (e.g. just above the heading for this section). Personally, I think information on the Galibier region should be added to fig 4.
- P17 L334 “Thus, no optimal precipitation forcing can be identified systematically.” Did you really expect to with this study? You could get a combined metric of the best of these precipitation forcings for the test region. Unsure if analysis of precipitation forcing datasets is necessary for the point you are trying to make with this paper.
- P17 L335 “ANTILOPE forcing leads to the simulated snow height the closest to the observations for both dates above 2200 m” this isn’t completely true. On 16 March, in the 2800 to 3100 m elevation band, ANTILOPE is not closest.
- P18 L366 “However, no precipitation forcing leads to the simulation of the most unbiased snow heights for all areas and evaluation dates.” Unclear how you got to this conclusion without quantifying with a combined metric for each area. If you did this, it would be worth stating the results.
- P18 L369 “The addition of snow transport increases and improves the simulated standard deviation” quantify this, i.e. how much closer adding snow transport brings the results to the Pleiades standard deviation.
- P20 section 2.3.1 states that it is about “differences between years” when it only concerns the differences in variability between years. Much more could be included here.
- P20 L403 “On average, the spatial variance of SMOD is underestimated except for simulations with AROME forcing at the two lowest elevation bands where the σ ratio is found close to 1.” It is unclear to me how this conclusion is drawn.
- P21 L404-406 “It is interesting to note that in the lowest elevation band Fig. 9 (b), an AROME pixel never has a snow height greater than the SMOD threshold during the year (see Sect. 2.8.1) and is therefore given a 0-day SMOD value, contrary to what happens for all the other simulations and the observation.” This is not shown in figure 9; the AROME distribution does not stretch to 0. Why? (A sketch of how SMOD and the σ ratio could be computed is given at the end of these results comments.)
- Why did you choose to have a figure for the distribution of snow heights and SMOD for elevation intervals but not for landform subgroups?
- P25 I found the content concerning figure 12 confusing. Likely because of the accidental use of the same plots for 2017-2018 and 2018-2019.
- Following on from my ‘major comment’ on figures, I have outlined why I struggled with the current figures and a few areas I think could be improved if you choose to stick with the current format. A specific and repeated comment is that the figures are too small to see or overcrowded.
- Fig 1: personal preference, but the inverse colourmap would be more intuitive in my opinion (light for high elevation, dark for low elevation).
- Fig 4 and C1
- please explain the unusual scalebar
- the smaller submaps are hard to see; consider using difference maps to aid comparison.
- Fig 4: if this is specifically to analyse the results for Grandes Rousses, you do not need to show the entire domain. If it is to visualise the entire area, why have you not shown the Galibier area?
- Areas that are masked are still included in the plot. You are not using these masked areas in your analysis, so is there a justification for including them? If not, they are just unnecessary noise and should be removed. I understand it aids your description of the impacts of SnowPappus e.g. P13 L293; however, you have decided to mask these regions for several valid reasons, and therefore they are not a reliable result to draw conclusions from.
- Fig 5,7,9 - VIOLIN PLOTS….
- Size needs to be increased dramatically. I strongly recommend using the height of the entire page if sticking with this format, so that the distributions can be interpreted. In print I can barely see the hatching, and the white dots for median values disappear entirely. Even on my monitor I have to zoom in massively to discern what you are discussing.
- I feel the figure needs more explanation; even if it is self-explanatory to you, why make it hard for someone who doesn’t use violin plots? There is no mention that the white dot is the median and that the black box (presumably) spans the first and third quartiles. There is also no mention of what the plot is showing (I know it might be obvious, but it is still worth stating), presumably a frequency density of the data. N is presumably the number of sample pixels in that subgroup.
- Fig 6,8,10,11,12 – comparison metrics
- No need to remove acronyms in legend.
- Orange and red are not easily distinguishable, especially in print. Consider more contrasting colours.
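Returning to the SMOD and σ-ratio comments above: a minimal, hypothetical sketch of how a per-pixel SMOD and the spatial standard deviation ratio could be computed, assuming SMOD is taken as the last day of the season with snow height above a threshold (the actual definition in Sect. 2.8.1 of the manuscript may differ). All values below are synthetic placeholders.

```python
# Hypothetical sketch: per-pixel SMOD as the last day with snow height above a
# threshold (pixels never exceeding it get a 0-day SMOD), and the sigma ratio as
# simulated over observed spatial standard deviation of SMOD within a subgroup.
# The 0.2 m threshold and the daily series are illustrative assumptions.
import numpy as np

def smod(daily_snow_height, threshold=0.2):
    """Index of the last day with snow height above threshold, or 0 if never exceeded."""
    above = np.where(daily_snow_height > threshold)[0]
    return int(above[-1]) if above.size else 0

rng = np.random.default_rng(2)
n_pixels, n_days = 500, 365
sim = np.clip(rng.normal(0.5, 0.6, (n_pixels, n_days)), 0.0, None)  # placeholder simulated series (m)
obs = np.clip(rng.normal(0.5, 0.8, (n_pixels, n_days)), 0.0, None)  # placeholder observed series (m)

smod_sim = np.array([smod(series) for series in sim])
smod_obs = np.array([smod(series) for series in obs])
sigma_ratio = smod_sim.std() / smod_obs.std()
print(f"sigma ratio (sim/obs): {sigma_ratio:.2f}")
```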
4 Discussion
- Discussion could be condensed to convey your key points more clearly and concisely.
- Substantial regions of your domain are being masked. Some of these are the regions that are most affected by blowing snow, e.g. topographic shadows. I understand this is often done given the limitations of both models and remote sensing products in these regions, but I think it would be worth mentioning how this could have influenced the results. E.g. there is a substantial amount of snow on glaciers, and topographically shaded regions are where snow will be redeposited by blowing snow processes and will therefore be deepest and last longest.
- P25 section ‘4.1 Sources of snow spatial variability’. When comparing with Sentinel-2 SMODs for 2017-2018 and 2018-2019 I think it is worth mentioning if there were any dramatic differences in precipitation and wind in those years which could impact the spatial variability.
- P29 L 586-587 “unrealistic snow interactions simulated at the borders of such masked areas” do you need to buffer the masked area then?
- P30 first paragraph. As a non-expert in this model, I am unsure where the soil properties come into play and if it is worth mentioning. Seems a bit out of place.
- P31 L621 “SPS scores are found more homogeneous and smaller using landforms grouping than elevation bands” to me this conclusion isn’t sufficiently explained and is hard to see flicking between the different graphs. This is another example of where I think some comparative metrics could really help support your statements, e.g. a comparison of the mean SPS scores for landforms vs elevation bands would clearly quantify that the landform SPS values are more homogeneous.
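As one way to make the comparative metric in the previous bullet concrete, a short sketch of the idea: the mean SPS and its spread across groups, computed for both groupings, would quantify which grouping is more homogeneous. The SPS values below are invented placeholders, not results from the manuscript.

```python
# Placeholder illustration of the suggested comparative metric: the spread of SPS
# across landform classes vs across elevation bands quantifies which grouping is
# more homogeneous. All values are invented for illustration only.
import numpy as np

sps_by_landform = {"Ridge": 22.0, "Peak (summit)": 24.0, "Spur": 20.5,
                   "Slope": 19.0, "Hollow": 21.0, "Valley": 23.0}
sps_by_elevation = {"2200-2500 m": 14.0, "2500-2800 m": 21.0, "2800-3100 m": 33.0}

for label, groups in (("landforms", sps_by_landform), ("elevation bands", sps_by_elevation)):
    values = np.array(list(groups.values()))
    print(f"{label}: mean SPS = {values.mean():.1f}, spread (std) across groups = {values.std():.1f}")
```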
5 Conclusions
- P31 L646 “The addition of snow transport to simulations provides a better estimate of snow height across elevations” What does ‘better’ mean? If you mean closer to the observations, I don’t think you can say this without quantifying it. If you mean ‘more physically realistic regardless of precipitation forcing’, then say that.
- P32 L654 “relatively close”, vague and unclear. Quantifying this would give me much more confidence in your results.
Technical comments:
- P25 L464-465 replace “mainly limited in areas…” with “most significant in areas…”
- P3 L60 delete “way”
- P3 L73-74 “In order to assess the value of a distributed snow model in the presence of uncertainty in the meteorological forcing, this uncertainty has to be accounted for to raise robust conclusions.” Bad phrasing. Consider replacing with e.g. “To robustly assess the value of a distributed snow model, the uncertainty in meteorological forcing must be considered”.
- P8 L147 Grammar/typo “in the 10^5 km2”.
- P8 L172 grammar/typo “… we only consider in our simulations only the precipitation amount from AROME.” Change to “ we only consider the precipitation amount from AROME in our simulations.” Or similar.
- P10 L208 “SBSM- like equations” don’t use acronyms without explaining.
- P11 L245 Don’t start paragraph with “Because…”.
- P11 L 248 – 249 “…reduces the vertical snow height standard error between 0.3 and 0.4 m.” missing ‘to’ or ‘by’ so it is unclear to the non-specialist if this is the error on the observations or the reduction in error.
- P12 L275
- P12-13 I think Eq3 should be better integrated in text.
- P14 L314-315 “The mean simulated snow height for the 2500 to 2800m elevation band (1.90 m) is consistent with observations on 13 May 2019.” Seems like an out of place statement.
- P17 L336 “…(Fig.7 and Fig. 8)…” figures referred to unnecessarily.
- P17 L342 “(from 0.16 m for SAFRAN to 0.32 m for AROME)” clarify if this is the standard deviation or the magnitude of underestimation of standard deviation.
- P20 L380 “(orange dashed and continuous lines are less distant than green dashed and continuous 380 lines)” discussing figures without referring to which ones.
- P25 L461 “not shown here” Don’t talk about results that you are not showing.
- P25 L465 “the impact on SMOD is mainly limited in areas classified as ’Peak (summit)’ or ’Ridge’.” I think ‘in’ needs to be replaced by ‘to’.
- P25 L474 “standard deviation radio around 0.5-0.75.” typo and consider replacing ‘around’ with ‘of’. “standard deviation ratio of 0.5-0.75.”
- P25 L480 “On the contrary to Fig. 11, the SAFRAN HR does not have the lowest SPS values”, the opposite is true, SAFRAN HR has the highest SPS. Worth stating.
- P29 L567 “On the other hand…” remove or reword.
- P31 L623 “Thus for Grandes Rousses and Galibier areas, we can again conclude that the observed snow height variability due to elevation is better captured than the variability due to landform.” I think this is misworded. Unless I’ve misunderstood, the opposite is true: landform variability is better captured than elevation variability.
- P31 L 641 “and in between two snow seasons” confusing statement, should be “between two dates in different snow seasons”. ‘in between’ suggests it was dates outside of the snow season.
Citation: https://doi.org/10.5194/egusphere-2023-2604-RC2 - AC1: 'Reply on RC2', Ange Haddjeri, 27 Mar 2024