Improving the CLASSIC (v1.8) Snow Model to Better Simulate Arctic Snowpacks
Abstract. This study enhances the snow model of the Canadian Land Surface Scheme including Biogeochemical Cycles (CLASSIC), with a particular focus on Arctic environments. Key snow model physics improvements include adjustments to the thermal conductivity at the top of the first soil layer, a revised computation of the temperature at the snow–soil interface (hereafter, bottom snow temperature), and the addition of a windless exchange coefficient in sensible heat flux calculations. Arctic-specific adaptations include blowing-snow sublimation losses, a new snow compaction scheme, and a snow thermal conductivity parameterization. Evaluations at seven mid-latitude and alpine sites (SnowMIP sites) and three Arctic sites (Bylot Island, Umiujaq, and Trail Valley Creek) show that these enhancements improve the overall simulated snowpack characteristics and soil temperatures across both SnowMIP and Arctic sites. The revised bottom snow temperature yields better agreement between simulated and observed snow and bottom snow temperatures. The new windless exchange coefficient reduced the surface temperature RMSE from 3.50 °C to 1.93 °C on average across all sites. The improved snow compaction scheme reduces the snow depth biases from 12.0 cm to 0.1 cm on average across the Arctic sites and improves the simulated snow densities while not degrading the overall model performance at the SnowMIP sites. Blowing-snow sublimation had a negligible effect at most sites, except at the wind-exposed sites of Umiujaq, Trail Valley Creek, and Senator Beck, decreasing snow depth on average by 2–4 cm. Adding a new snow thermal conductivity parameterization—combined with all previous developments—reduces the RMSE of the simulated soil temperatures from 5.3 °C to 3.0 °C on average at all Arctic sites. Our new developments demonstrate the ability of a single-layer snow model to reasonably reproduce Arctic bulk snowpack characteristics, while maintaining good performance at the SnowMIP sites. 
Inherent uncertainties remain in the forcing datasets, especially due to the harsh Arctic environment characterized by strong winds, snow redistribution, frost, and polar night. Future model developments will focus on spatial-scale simulations across the whole Arctic, with particular attention to snow cover fraction parameterizations to better capture sub-grid-scale spatial heterogeneity in Arctic environments.
General comments:
This study evaluates potential snow model enhancements for the CLASSIC model, with a particular focus on Arctic environments. The enhancements address the thermal conductivity of the surface soil, a revised computation of the temperature at the snow–soil interface, and the addition of a windless exchange coefficient in sensible heat flux calculations. Arctic-specific adaptations include blowing-snow sublimation losses, a new snow compaction scheme, and updates to the snow thermal conductivity parameterization. Detailed comparisons between simulations with differing physics representations and observations of snow depth, SWE, snow albedo, surface temperature, and soil temperature indicate that the enhancements improve model accuracy. However, additional analyses could make these results more robust, comprehensive, and meaningful. In particular, there is value in considering additional evaluation metrics, reporting the statistical significance of results, and examining more deeply whether the model updates improve accuracy for the right reasons. Overall, this study will be valuable, particularly for understanding potential avenues for improving CLASSIC snow modeling, but major revisions are recommended before publication.
Specific comments:
Line 124: Please include a statement noting the potential uncertainties associated with using this rain–snow partitioning threshold.
Line 137: Please describe or provide a reference for the quality control and undercatch correction procedures.
Section 2.5: Please provide details regarding the model spatial resolution and how it may impact comparisons with point-based measurements. This is discussed later, but it would be valuable to address it here as well. For example, at the point scale, vegetation cover and snow cover fraction are effectively binary.
What uncertainties are associated with the model forcing inputs?
Section 2.6: Please include a sentence explaining why the observed snow albedo is assumed to be unaffected by surrounding vegetation. With a snow depth threshold of only 10 cm, it seems that albedo observations may still be influenced by protruding vegetation.
Please also include correlation as an evaluation metric to capture how well the model simulates temporal variability at a daily time step.
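For concreteness, the two suggested daily-time-step metrics could be computed as follows. This is a minimal sketch, not taken from the manuscript; the series names `sim` and `obs` are hypothetical paired daily values (e.g., snow depth in cm):

```python
import numpy as np

def rmse(sim, obs):
    """Root-mean-square error between paired daily series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def pearson_r(sim, obs):
    """Pearson correlation, capturing how well the model reproduces
    day-to-day temporal variability (RMSE alone cannot)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.corrcoef(sim, obs)[0, 1])

# Hypothetical daily snow depth series (cm), for illustration only
obs = [10.0, 12.0, 15.0, 14.0, 13.0, 16.0]
sim = [9.0, 13.0, 14.0, 15.0, 12.0, 17.0]
print(rmse(sim, obs), pearson_r(sim, obs))
```

Reporting both metrics side by side would distinguish amplitude errors (RMSE) from phasing errors (correlation).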
Section 3.1:
It is difficult to determine the primary takeaways while reading through this section. It would be valuable to first present the overarching takeaways and then discuss the details of the individual metrics site by site.
It could also be valuable to show the results from Figure 3 as a multisite average in addition to the individual site comparisons.
Why do the simulations show significant snow depth at SNB in July when SWE is near zero?
When discussing skill qualitatively (e.g., “good”), please support these statements with skill metrics (RMSE and correlation).
Please note whether differences in evaluation metrics between simulations are statistically significant.
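One simple way to test whether a metric difference between two model configurations exceeds sampling noise is a paired bootstrap over days. The sketch below resamples days independently, which ignores autocorrelation in daily snow series; a block bootstrap would be more defensible. All series names are hypothetical:

```python
import numpy as np

def bootstrap_rmse_diff(obs, sim_a, sim_b, n_boot=10000, seed=0):
    """Bootstrap the RMSE difference (config A minus config B) against
    observations. Returns the mean difference and a 95% confidence
    interval; if the interval excludes zero, the skill difference is
    unlikely to be sampling noise. Days are resampled independently,
    so serial correlation is neglected in this simplified sketch."""
    rng = np.random.default_rng(seed)
    obs, sim_a, sim_b = (np.asarray(x, float) for x in (obs, sim_a, sim_b))
    n = obs.size
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample days with replacement
        err_a = np.sqrt(np.mean((sim_a[idx] - obs[idx]) ** 2))
        err_b = np.sqrt(np.mean((sim_b[idx] - obs[idx]) ** 2))
        diffs[i] = err_a - err_b
    return float(diffs.mean()), np.percentile(diffs, [2.5, 97.5])
```

The same resampling could be applied to bias or correlation differences for each site and variable.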
Please discuss the coherence and relationships between changes in skill scores across variables. For example, are the default snow depth and SWE biases correlated? Are changes in the biases of these variables correlated? Do reduced biases in albedo correspond with reduced biases in surface temperature and soil temperature? Are temperature and albedo biases physically consistent with SWE and snow depth biases?
There is also value in evaluating whether interannual variability is improved in key metrics (e.g., correlation of simulated vs. observed peak SWE, day of disappearance, spring ablation rates, mean annual SWE, etc.).
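The interannual metrics suggested above can be extracted per snow season and then correlated across years between simulation and observation. A minimal sketch, assuming one season's daily SWE series (mm) with day-of-season indices; the function name, threshold, and series are all hypothetical:

```python
import numpy as np

def seasonal_metrics(days, swe, thresh=1.0):
    """Peak SWE (mm) and snow-disappearance day from one season's daily
    SWE series. The disappearance day is the first day at or after the
    peak with SWE below `thresh`; None if the snowpack never clears."""
    swe = np.asarray(swe, float)
    peak_idx = int(np.argmax(swe))
    peak_swe = float(swe[peak_idx])
    gone = np.nonzero(swe[peak_idx:] < thresh)[0]
    disappearance = days[peak_idx + gone[0]] if gone.size else None
    return peak_swe, disappearance

# Hypothetical single-season series, for illustration only
days = list(range(10))
swe = [0, 5, 10, 20, 15, 8, 3, 0.5, 0, 0]
print(seasonal_metrics(days, swe))
```

Collecting these per-year values for both model and observations, then computing their cross-year correlation (e.g., with `np.corrcoef`), would show whether the physics updates improve interannual variability and not just climatological means.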
Section 4.1: How do errors between the LSM and site observations compare with those reported in previous studies (e.g., Lin et al., 2025; Abolafia-Rosenzweig et al., 2022, 2024; Chen et al., 2014; etc.)?
https://doi.org/10.1175/JHM-D-24-0082.1; https://doi.org/10.1029/2022MS003141; https://doi.org/10.1029/2023MS003869; https://doi.org/10.1002/2014JD022167
In the context of combined uncertainties from point-scale comparisons, forcing uncertainties, and observational uncertainties, are the changes in model skill significant?
Technical correction:
Remove the comma on line 431.