the Creative Commons Attribution 4.0 License.
Equifinality Contaminates the Sensitivity Analysis of Process-Based Snow Models
Abstract. This study assesses the impact of different flux parameterizations and model parameter values on simulations of snow depth. Through a sensitivity analysis of a process-based snow model built on the SUMMA framework, various options for parameterizing snow processes and adjusting parameter values were evaluated in order to identify suitable modeling approaches, understand sources of uncertainty, and diagnose model weaknesses. The study focused on model parameterizations of precipitation partitioning, liquid water flow, snow albedo, atmospheric stability, and thermal conductivity. Sensitivity analysis (SA) is performed using the one-at-a-time (OAT) method as well as the Morris Method for estimating Elementary Effects, in order to explore both the magnitudes and the patterns of sensitivities. These sensitivity analyses are used to evaluate process parameterizations, model parameter values, and model configurations. Four metrics, namely the Nash-Sutcliffe Efficiency (NSE), the Kling-Gupta Efficiency (KGE), the root mean squared log error (RMSLE), and the mean, are used to assess the similarity between simulated and observed data. Bootstrapping is employed to estimate the variability of mean Elementary Effects and to establish confidence bounds. The key findings of this research indicate that sensitivity analysis of snow modelling parameters plays a crucial role in understanding their impact on decision outcomes. The most sensitive parameters include the critical temperature for precipitation partitioning, the thermal conductivity of snow, and the liquid water drainage parameters. Water balance fluxes exhibited higher sensitivity than energy balance fluxes in simulating snow processes, highlighting the importance of accurately representing water balance processes in snow models for improved accuracy and reliability.
A key finding in this study is that the sensitivity of performance metrics to model parameters is contaminated by equifinality (i.e., parameter perturbations lead to similar performance metrics for quite different snow depth time series), and hence many published parameter sensitivity studies may provide misleading results. These findings have implications for snow hydrology research and water resource management, providing valuable insights for optimizing snow modelling and enhancing decision-making.
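The Morris Elementary Effects and the KGE metric mentioned in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' code: the toy model, function names, and the fixed step size are assumptions made here for demonstration only.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency between simulated and observed series (1 = perfect)."""
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation component
    alpha = np.std(sim) / np.std(obs)      # variability ratio
    beta = np.mean(sim) / np.mean(obs)     # bias ratio
    return 1.0 - np.sqrt((r - 1.0)**2 + (alpha - 1.0)**2 + (beta - 1.0)**2)

def elementary_effects(model, x, delta=0.1):
    """One Morris trajectory: perturb each parameter in turn and record
    EE_i = (f(after perturbing x_i) - f(before)) / delta."""
    x = np.asarray(x, dtype=float).copy()
    effects = np.empty(len(x))
    f_prev = model(x)
    for i in range(len(x)):
        x[i] += delta          # keep earlier perturbations (trajectory design)
        f_new = model(x)
        effects[i] = (f_new - f_prev) / delta
        f_prev = f_new
    return effects
```

In the full Morris method, elementary effects are averaged over many randomized trajectories; bootstrapping those means is one way to obtain the confidence bounds on sensitivities referred to in the abstract.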
Status: final response (author comments only)
RC1: 'Comment on egusphere-2023-3049', Steven Markstrom, 13 May 2024
Review of “Equifinality Contaminates the Sensitivity Analysis of Process-Based Snow Models” by Tek Kshetri, Amir Khatibi, Yiwen Mok, M Shahabul Alam, Hongli Liu, and Martyn P. Clark.
Thank you for the opportunity to review this article. I found it to be very well written and organized. I have a few general comments, followed by minor suggestions identified by line number below. If you have any questions about my review, please do not hesitate to contact me.
General comments:
The word “equifinality” is in the title and it is an important concept in this article, but because this term has such a history in the literature, I think that a better description of what is specifically meant by equifinality is required. For example, are “parameter insensitivity” and “equifinality” the same thing within the context of this article?
Likewise, there needs to be more in the discussion about equifinality in the results section. For example, in the paragraph at lines 304 through 314, there is some discussion of “differences in model sensitivity” to different parameters and the identification that the model seems to be more sensitive to “water balance” parameters than “energy balance” parameters. What does this imply about model structure when some parameters exhibit equifinality and others don’t? How could this information be used when modeling with SUMMA?
There is no mention of the discretization and distribution of parameters and the simulated snowpack throughout space. Were the parameters varied spatially during the sensitivity analysis? Presumably the model simulates a spatially varying snowpack (i.e., the snowpack is deeper in some locations than others); how did the objective function calculations account for this variability? Do the quality of the performance measures and their sensitivities vary according to how the depth of the pack varies across space? Maybe all of this is beyond the scope of the article, but some mention of how spatial variability is dealt with is required.
Verify that the figure call outs in the text refer to the correct figure captions. I have identified some problems with this below.
Specific comments:
Ln. 221 This should reference figure 3 (not figure 4)
Ln. 264 This should reference figure 4, (not figure 5)
Ln. 286 This should reference figure 5 (not figure 6)
Ln. 277 “of” instead of “as”
Fig 4. Y axis label should be “Mean snow depth (m)”
Citation: https://doi.org/10.5194/egusphere-2023-3049-RC1
AC2: 'Reply on RC1', Tek Kshetri, 02 Jul 2024
Dear Steven Markstrom,
Thank you very much for your thorough review and insightful comments on our manuscript titled “Equifinality Contaminates the Sensitivity Analysis of Process-Based Snow Models.” We appreciate your positive feedback on the organization and writing of our article, and we are grateful for your suggestions to improve our manuscript. Below, we address your general comments and minor suggestions in detail. Please find our responses in blue.
Best,
Tek Kshetri et al.
RC2: 'Comment on egusphere-2023-3049', Francesca Pianosi, 05 Jun 2024
The paper presents a sensitivity analysis of a process-based snow model looking at how different choices of model parameter values and/or equations to represent snow processes impact snow depth simulations and accuracy with respect to observations available at a site in southwestern Idaho.
Overall, the manuscript is well written, though some key aspects of the methodology applied need to be clarified (see points 2 and 3 below). Also, the contribution of the manuscript needs to be better articulated (point 1). I am not an expert in snow process modelling but, as an expert in sensitivity analysis of hydrological models more broadly, I do not find the key finding of the manuscript particularly novel or unexpected, so I think the authors should better articulate the specific contribution of their study and how it will help and inform other process-based snow modellers.
1) TITLE AND CONTRIBUTION
Starting from the title, the authors highlight as the key contribution of their work that sensitivity analysis results “are contaminated” by equifinality; however, it is not entirely clear to me what this “contamination” means or what its implications are.
On L. 356 they say:
“the sensitivity of performance metrics to perturbations in model parameters is contaminated by equifinality. We illustrate some cases in this paper where parameter perturbations lead to similar performance metrics for quite different snow depth time series. Given that many published parameter sensitivity studies are based on the sensitivity of performance metrics to model parameters, the conclusions from many model sensitivity analysis studies may not be trustworthy.”
Now the fact that different parameter combinations lead to different simulated time series all associated with similar values of a performance metric is not surprising - this is the very definition of equifinality. As for SA, even if a performance metric exhibits small variability, it can be subject to sensitivity analysis: in fact, as shown in Figure 5, in this work too it was possible to clearly estimate the relative importance of parameters in determining the (small) variability of the 3 performance metrics (RMSLE, KGE, NSE). The Figure shows that these sensitivities are a bit different – for example thermal conductivity has a low sensitivity index for RMSLE and a much higher one for KGE and NSE – but again this is not surprising, as we know from the SA literature that different metrics are sensitive to different parameters (e.g. see review in: https://doi.org/10.1016/j.earscirev.2019.04.006), which is also why it is good practice to use a range of metrics for model calibration and evaluation.
To be honest, in this case these differences do not look too dramatic either: while the parameter ranking is not exactly the same across the four panels in Figure 5, the screening result is not significantly changed (the first six parameters are important while the other six are uninfluential) and is also consistent with the conclusions from visual inspection of the simulated time series in Figure 3. So, I think the authors need to better articulate what they mean by the statement that “equifinality contaminates sensitivity analysis” and in what way “the conclusions from many model sensitivity analysis studies may not be trustworthy”. Maybe some specific examples from the snow modelling literature and previous SA applications to snow models would help to make the case of what the new lessons learnt here are.
2) MODEL AND EXPERIMENTAL SET-UP
The set-up of the sensitivity analysis and of the model itself needs to be clarified.
On line 143 the authors say that they will assess “the selection of parameterizations (i.e., equations used to parameterize specific processes), the selection of model parameters used in the parameterizations (i.e., the model equations), and the model discretization configurations”. However, the “discretization configurations” are never mentioned or reported again, and the “selection of parameterizations” is only applied to two processes (atmospheric stability and albedo) out of the five modelled (see Table 1). So, this suggests to me that this is mostly a “conventional” sensitivity analysis of model parameters, with the additional assessment of different parameterisations for some of the modelled processes. If so, this needs to be clarified throughout the manuscript.
Also, on L. 89 the authors say “This study simulates snow processes for the period November 2005 to June 2006. This choice was made to reduce the computational effort required for modelling and analysis”. The sentence suggests the model has a long run time, but it is not clear to me where this complexity comes from if the model only simulates vertical fluxes – as suggested by Fig. 2 – at one location (the Aspen site). Or maybe the model does simulate the snow depth over the entire spatial domain, but then if so, how relevant is it to look at simulations (and their sensitivity) at one location only? This needs clarification.
3) PERFORMANCE METRICS
In Table 2 and throughout the manuscript, the authors use the term “performance metric” to refer to all metrics, including the Mean (i.e. the average snow depth). However, unlike NSE, KGE and RMSLE, this statistic has nothing to do with the model performance (unless the Mean of the simulated snow depth is compared to the mean of the observations, which however does not seem to be the case from the equation in the last row of Table 2). This difference needs to be clarified. I’d suggest using the term “performance metric” for NSE, KGE and RMSLE, and “output metric” for the Mean (a term often used in the sensitivity analysis literature for model outputs subject to SA that are based on model simulations only).
The point should also be clarified in the results section.
On L. 247-248, the sentence “Overall, the performance results show a high degree of consistency between the simulated and observed snow depth data based on mean metric” is unclear. The consistency between simulated and observed data can be shown by the KGE, NSE and RMSLE metrics, but not the Mean metric. So, how did the authors get to that conclusion?
Similarly, on L. 270: “in cases where there is a discernible pattern in each parameter, it becomes possible to identify the optimum value that can lead to the highest level of agreement between the simulation and observation. For instance, the simulation of snow depth will improve for the values of the exponent for meltwater flow greater than 2” Again, how one can infer the level of agreement between the simulation and observation based on Figure 4, given it shows the (observations-free) mean of the simulated snow depth?
Last, on L. 178 the authors mention “flow simulation results”, but I believe the metrics used here are calculated on snow depth simulation results, not flows. Or are NSE, KGE and RMSLE computed over flow simulations? (But then if so, what flow observations are being used? And what about the other processes and parameters that determine the streamflow at the gauging station?) This really needs to be clarified.
MINOR
L. 26 “that sensitivity analysis of snow modelling parameters plays a crucial role in understanding their impact on decision outcomes”. This sentence is unclear (what “decision outcomes”?) please rephrase
L. 46 “interdisciplinary challenge” I would not say that “process-based hydrological modelling” is a particularly interdisciplinary work!
L. 53 “and different model parameters” suggest replacing “parameters” by “parameter values”
Figure 2 is unnecessarily large, consider reducing it. Also, using consistent wording between this Figure and the first column of Table 1 would improve clarity (for example, I suppose “(partitioning)” in Fig. 2 refers to “Precipitation flux” in Table 1, “(Drainage parameterization)” corresponds to “Liquid water in snowpack flux”, etc.)
L. 179 “altering a parameter influences the outcome of a decision”. Unclear, please rephrase.
Figure 4: units of measurements missing on vertical axis label
L. 282: “The variance in the ranking of parameters’ accuracy prediction by different performance metrics”. This sentence does not make much sense, please revise.
L. 286: “is shown in Figure 6” should be “Figure 5” (I guess)
L. 309: “the differences between the simulations with lowest thermal conductivity (blue line) have differences with the observations that are…” Convoluted/confusing sentence, please clarify.
Citation: https://doi.org/10.5194/egusphere-2023-3049-RC2
AC1: 'Reply on RC2', Tek Kshetri, 02 Jul 2024
Dear Francesca Pianosi,
Thank you very much for your thorough review and insightful comments on our manuscript titled “Equifinality Contaminates the Sensitivity Analysis of Process-Based Snow Models.” We appreciate your positive feedback on the organization and writing of our article, and we are grateful for your suggestions to improve our manuscript. In the attached document, we address your general comments and minor suggestions in detail. Please find our responses in blue.