Identifying Dominant Parameters Across Space and Time at Multiple Scales in a Distributed Model Using a Two-Step Deep Learning-Assisted Time-Varying Spatial Sensitivity Analysis
Abstract. Distributed models require parameter sensitivity analyses that capture both spatial heterogeneity and temporal variability, yet most existing approaches collapse one of these dimensions. We present a two-step, deep learning-assisted, time-varying spatial sensitivity analysis (SSA) that identifies dominant parameters across space and time. Using SWAT for runoff simulation of the Jinghe River Basin, we first apply the Morris method with a spatially lumped strategy to screen influential parameters and then perform SSA using a deep learning-assisted Sobol' method for quantitative evaluation. A key innovation lies in the systematic sensitivity evaluation with parameters represented and analysed at both subbasin and hydrologic response unit (HRU) scales, enabling explicit treatment of distributed parameters at their native spatial resolutions. To reduce computational burden, two multilayer perceptron surrogates were trained for 195 subbasin and 2,559 HRU parameters, respectively, allowing efficient time-varying SSA of NSE-based Sobol' indices over 3- and 24-month rolling windows during 1971–1986. Results reveal structured, scale-dependent controls: spatially, sensitivity hotspots are coherent between scales but become more localized at the HRU level, reflecting heterogeneity in land use, soils, and topography; temporally, sensitivities fluctuate with runoff in the 3-month window, while event-scale variations are smoothed in the 24-month window, yielding more persistent patterns governed by storage and routing processes. The proposed framework provides a computationally efficient and unified approach for identifying scale-dependent sensitivity hotspots and hot moments, thereby supporting targeted calibration and enhancing the interpretability and predictive robustness of distributed models under nonstationary conditions.
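For concreteness in the comments below, here is my reading of the proposed two-step workflow as a minimal, runnable sketch. All names, sample sizes, and library choices (SALib for the Sobol' machinery, scikit-learn for the MLP) are my own assumptions and stand-ins, not the authors' implementation; random arrays replace the actual SWAT runs.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from sklearn.neural_network import MLPRegressor

n_params = 195  # subbasin-scale parameter count reported in the abstract
problem = {
    "num_vars": n_params,
    "names": [f"p{i}" for i in range(n_params)],
    "bounds": [[0.0, 1.0]] * n_params,  # placeholder normalized ranges
}

# Stand-ins for the training data: parameter samples (X_train) and the
# NSE of each SWAT run evaluated over one rolling window (y_window).
rng = np.random.default_rng(0)
X_train = rng.random((2000, n_params))
y_window = rng.random(2000)

surrogate = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
surrogate.fit(X_train, y_window)

# The Saltelli sample is cheap to evaluate on the surrogate, so a large
# base sample is feasible despite the parameter count.
X_sobol = saltelli.sample(problem, 1024, calc_second_order=False)
y_sobol = surrogate.predict(X_sobol)
Si = sobol.analyze(problem, y_sobol, calc_second_order=False)
print(Si["ST"][:10])  # total-order indices, first ten parameters
# Repeating this per rolling window yields the time-varying indices.
```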
Space-time varying sensitivity analysis remains an active research topic, given its potential value for deeper analysis of distributed models on the one hand and its very high computational demand on the other. The study presented here is thus relevant to the community.
My main points for revision concern the Discussion section, which does not compare the current results with previous findings, and the lack of a robustness analysis testing the influence of methodological choices and assumptions. Both can be rectified. Please see my detailed comments below.
Main comments:
[1] Any sequential strategy in which different methods are applied in series depends on the early steps not overly constraining the outcome of the later steps. In this case, for example, it must be ensured that the screening does not eliminate parameters that might later become relevant when assessed in a distributed manner. How can the authors ensure that this problem does not occur in the sequential application proposed here?
[2] In this case study, have the authors tested whether the results of the distributed sensitivity analysis change when a different parameter set is retained after the first stage, e.g. by repeating step 2 with all parameters for smaller test cases? A sketch of such a check follows below.
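A minimal sketch of this check, using a toy model as the "small test case"; the SWAT-style parameter names and the set of screened-out parameters are purely illustrative:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem_full = {
    "num_vars": 6,
    "names": ["CN2", "SOL_AWC", "ESCO", "GW_DELAY", "ALPHA_BF", "CH_N2"],
    "bounds": [[0.0, 1.0]] * 6,
}

def toy_model(x):
    # Stand-in for a small SWAT test case; NOT the authors' model.
    return 3.0 * x[0] + 2.0 * x[1] * x[2] + 0.01 * x[3] + 0.5 * np.sin(x[4])

# Sobol' analysis over ALL parameters, i.e. without the Morris screening.
X = saltelli.sample(problem_full, 1024, calc_second_order=False)
Y = np.apply_along_axis(toy_model, 1, X)
ST = sobol.analyze(problem_full, Y, calc_second_order=False)["ST"]

screened_out = {"GW_DELAY", "CH_N2"}  # hypothetical step-1 eliminations
for name, st in zip(problem_full["names"], ST):
    flag = "  <-- eliminated in step 1" if name in screened_out else ""
    print(f"{name:10s} ST = {st:6.3f}{flag}")
# Any eliminated parameter with a non-negligible total-order index would
# indicate that the sequential screening has constrained step 2.
```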
[3] To what extent have the authors tested the performance of the MLP surrogates in terms of the consistency of the identified sensitive parameters? One possible check is sketched below.
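For example, the ranking implied by the surrogate-based total-order indices could be compared against a ranking obtained from (a necessarily smaller number of) direct SWAT evaluations, via Spearman rank correlation and top-k overlap. The index vectors below are stand-ins only:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Stand-ins: replace with the study's actual total-order index vectors.
ST_surrogate = rng.random(195)                     # MLP-based estimates
ST_direct = ST_surrogate + 0.05 * rng.random(195)  # direct-run estimates

rho, p = spearmanr(ST_surrogate, ST_direct)
top_k = 10
top_s = set(np.argsort(ST_surrogate)[-top_k:])
top_d = set(np.argsort(ST_direct)[-top_k:])
print(f"Spearman rho = {rho:.3f} (p = {p:.2g})")
print(f"overlap of top-{top_k} parameters: {len(top_s & top_d)}/{top_k}")
```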
[4] Have the authors estimated confidence limits on the resulting sensitivity indices to ensure that their analyses have converged? Without such convergence tests it is difficult to assess the robustness of the results (e.g. Sarrazin et al., 2016, https://doi.org/10.1016/j.envsoft.2016.02.005). A sketch of one such check follows.
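For reference, SALib's Sobol' analyzer already returns bootstrap confidence intervals, so a convergence check in the spirit of Sarrazin et al. (2016) can be obtained by tracking the interval widths as the base sample grows. A sketch with a toy function standing in for the surrogate:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {"num_vars": 3, "names": ["a", "b", "c"], "bounds": [[0, 1]] * 3}

def f(x):
    # Toy model standing in for the trained surrogate.
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

for n in (256, 512, 1024, 2048):  # growing base sample: convergence check
    X = saltelli.sample(problem, n, calc_second_order=False)
    Si = sobol.analyze(problem, f(X), calc_second_order=False,
                       num_resamples=200, conf_level=0.95)
    width = np.max(Si["ST_conf"])  # widest 95% bound across parameters
    print(f"N={n:5d}  max ST confidence bound = {width:.3f}")
# The indices can be considered converged once the confidence bounds fall
# below a chosen threshold (e.g. a few percent of the largest index).
```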
[5] Why do you see such strong differences between the subbasin and HRU results? If this is potentially due to the different numbers of parameters (195 vs. 2,559), can this not be tested and confirmed?
[6] Please check the manuscript again for spelling issues. For example, in the caption of Fig. 5, "show lagged Spearman's rank correlations (r) between and runoff", a word is missing after "between".
[7] The Discussion section is a good start, but it currently does not fulfil its actual role, which is to discuss the results of this specific study in the context of previous studies. Section 4.1 merely reviews the results, while Section 4.2 makes some references to potential future explorations. The current Section 4.1 should therefore be part of the Results section. In Section 4, the authors need to discuss how their findings differ (or not) from previous findings regarding the sensitivity of SWAT parameters. Did they find new process influences compared to other studies? Did the different approach yield different results? Likewise, how does their methodology compare to previous space-time varying analyses? The authors made some different choices and assumptions; how did these influence the results and findings?