This work is distributed under the Creative Commons Attribution 4.0 License.
Sensitivities of subgrid-scale physics schemes, meteorological forcing, and topographic radiation in atmosphere-through-bedrock integrated process models: A case study in the Upper Colorado River Basin
Abstract. Mountain hydrology is controlled by interacting processes extending from the atmosphere through the bedrock. Integrated process models (IPMs), one of the main tools needed to interpret observations and refine conceptual models of the mountainous water cycle, require meteorological forcing that simulates atmospheric processes to predict hydroclimate, which subsequently impacts surface and subsurface hydrology. Complex terrain and extreme spatial heterogeneity in mountainous environments drive uncertainty in several key considerations in IPM configurations and require further quantification and sensitivity analyses. Here, we present an IPM using the Weather Research and Forecasting (WRF) model coupled with an integrated hydrologic model, ParFlow-CLM, implemented over a domain centered on the East River Watershed (ERW), located in the Upper Colorado River Basin (UCRB). The ERW is a heavily instrumented 300 km2 region in the headwaters of the UCRB near Crested Butte, CO, with a growing atmosphere-through-bedrock observation network. Through a series of experiments in water year 2019 (WY19), we use four meteorological forcings derived from commonly used reanalysis datasets, three subgrid-scale physics scheme configurations, and two terrain-shading options within WRF to test the relative importance of these experimental design choices on key hydrometeorological metrics, including precipitation, snowpack, evapotranspiration, groundwater storage, and discharge simulated by ParFlow-CLM. Results reveal that the subgrid-scale physics configuration contributes larger spatiotemporal variance in simulated hydrometeorological conditions, whereas variance across meteorological forcings with a common subgrid-scale physics configuration is more spatiotemporally constrained. For example, simulated discharge shows greater variance in response to the WRF simulations across subgrid-scale physics schemes (26 %) than across meteorological forcings (6 %). The topographic radiation option has minor effects on watershed-average hydrometeorological processes, but adds profound spatial heterogeneity to local energy budgets (±30 W/m2 in shortwave radiation and 1 K air-temperature differences in late summer). The findings from this study provide guidance on an IPM setup that most accurately represents atmosphere-through-bedrock hydrometeorological processes and can be used to guide future modeling and fieldwork in mountainous watersheds.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2022-437', Anonymous Referee #1, 01 Aug 2022
Referee comments on:
Title: Sensitivities of subgrid-scale physics schemes, meteorological forcing, and topographic radiation in atmosphere-through-bedrock integrated process models: A case study in the Upper Colorado River Basin
Author(s): Zexuan Xu et al.
MS No.: egusphere-2022-437
MS type: Research article

Summary
This study evaluates the influence of different meteorologic forcing (based on different reanalysis datasets), subgrid-scale physics schemes, and terrain shading on simulated hydrometeorology. They find that physics configurations result in more variance in simulated hydrometeorological conditions, and that meteorological forcing has a smaller impact. This type of sensitivity study is important to understanding where and how to focus further model development and observational field campaigns (as the authors note), and this particular study evaluates some sensitivities that I have not previously seen addressed. In my view, this has the potential to be a highly valuable contribution, but I believe it could use some sharpening of its framing, earlier recognition of the problems across all model configurations with respect to streamflow simulation, and more quantitative comparisons of some of the results.
Major comments
One major comment is around framing: at times, the authors imply that these results show an optimal IPM configuration but this is never clearly evaluated. At other times, the authors note that validation against observations is not a major goal of this study – in which case, it cannot indicate an optimal IPM configuration. My recommendation is to avoid implying that an optimal configuration is identified here. On a related note, I think the poor simulation of streamflow by the IPM should be mentioned earlier (perhaps in the abstract) – while it’s ok that this is the case, having this result buried in Figure 7 felt a bit deceptive.
Similarly, the authors refer in the introduction to recent arguments by Lundquist that models may be outperforming observations. In my view, they then miss a relatively easy opportunity to contribute to this debate: adding a ParFlow-CLM run forced by PRISM and reporting the results in Figure 7 would provide a case study testing whether meteorological models or observations are indeed more accurate in this case (assuming we basically believe that ParFlow-CLM is not biasing the results so much as to invert this response). I'm loath to be the reviewer who suggests the authors do a different study than the one they have done – but in this case, the introduction led in this direction, and one additional simulation would significantly enhance the value of the present work.
Finally, it would have been useful to see more quantitative model evaluation, and some description of model evaluation in the methods. I had two specific concerns about the identification of BSU-CFSR2 as the “best” model and that used for the topographic radiation evaluation. First, I didn’t see a quantitative evaluation of models against PRISM to make this evaluation. Why not report an NSE or RMSE? Second, given the idea that PRISM is not necessarily more accurate than WRF, I’m not sure how important PRISM is as a benchmark here. Could you analyze the impact of topographic shading for two model configurations with very different results? Assuming you find similar results for a different configuration, it would just be helpful to have a sentence confirming that evaluating topographic results in a different WRF configuration had similar results.
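To make the request concrete, here is a minimal sketch of such a quantitative evaluation, assuming the WRF and PRISM fields have been regridded to a common grid and loaded as NumPy arrays (all variable names below are hypothetical placeholders, not quantities from the manuscript):

```python
import numpy as np

def rmse(sim, obs):
    """Root-mean-square error between a simulated and a reference field."""
    return np.sqrt(np.nanmean((sim - obs) ** 2))

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, values < 0 are worse than the mean."""
    return 1.0 - np.nansum((sim - obs) ** 2) / np.nansum((obs - np.nanmean(obs)) ** 2)

# Hypothetical inputs: one gridded annual-precipitation field per WRF
# configuration, plus the PRISM reference on the same grid.
wrf_precip = {"config_A": np.random.rand(100, 100),
              "config_B": np.random.rand(100, 100)}
prism_precip = np.random.rand(100, 100)

for name, field in wrf_precip.items():
    print(f"{name}: RMSE={rmse(field, prism_precip):.3f}, NSE={nse(field, prism_precip):.3f}")
```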
Minor comments
Line 21 – Based on only the abstract, it’s not clear to me how the “spatiotemporal variance in simulated hydrometeorological conditions” is defined. I think you mean the model response varies more across the model structure options than meteorologic – but from this sentence, another possible interpretation is that spatiotemporal variance itself (e.g., the variance of some response variable across grid cells) is greater in certain physics schemes. Is there a way to avoid this ambiguity?
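One way to remove the ambiguity is to write the two readings out explicitly (notation is ours, not the authors'): for a variable $y_{c,x,t}$ simulated by configuration $c$ at grid cell $x$ and time $t$,

$$\sigma^2_{\mathrm{config}}(x,t)=\operatorname{Var}_c\!\left[y_{c,x,t}\right] \qquad \text{vs.} \qquad \sigma^2_{\mathrm{space}}(c,t)=\operatorname{Var}_x\!\left[y_{c,x,t}\right],$$

where the first measures how much the response varies across model-structure options at a fixed location and time, and the second measures spatial heterogeneity within a single configuration.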
Line 27 – The conclusion that these findings provide guidance on the most accurate IPM was a bit of a jump from the prior sentences, which just described model sensitivity. To justify this, it would be better to describe what analysis supports this guidance (a calibration, I presume? Against what variables?). Alternatively, your concluding point could note that these sensitivity analyses show where more effort should be focused to constrain our process-based understanding.
Line 34 – remove “that”
Line 35 – “may have”? Could you express the reason this is stated with uncertainty?
Line 44 – Is “relevant” here meaning for larger-scales? Or respective relevant scales for each process?
Line 46 - Tying the motivation for this article to recent discussions about the relative skill of process-based atmospheric models vs gridded interpolated datasets provides a great motivation for the present study.
Line 58 – “To further compound…” I think this is a good point, but could you provide an example?
Line 68 – I’m a little uncomfortable with “properly-configured” unless you feel this analysis truly fixes equifinality issues. Maybe “appropriately-configured”?
Line 120 – “We can establish” leaves the reader uncertain if you did this or not.
Line 121-126 – This motivation is very nicely stated (although I don’t think it’s a hypothesis in the context of this study) – could you state this explicitly in the abstract?
Line 127 – Is “observations” here meant to refer to gridded reanalysis products? As the Lundquist paper points out, those are also models (generally statistical interpolations), so I’d suggest another word. I also note that this section doesn’t say anything about identifying an optimal model configuration, which is an outcome highlighted in the abstract.
Line 141 – “representative” is a bit of a tough argument to make – consider “similar to many other basins in…”
Line 141 – “near” should be “nearly”
Line 153 – You noted a lack of observations earlier, which disconnects somewhat with the “heavily-instrumented” claim here. I think this could be mitigated by noting that the instrumentation is intense at this site, but it’s extremely difficult to observe many processes with high accuracy at relevant scales.
Figure 1- As I read through the rest of the paper, I found I needed a more detailed study area map for the ERW specifically – with elevation and streamlines, perhaps?
Line 254 – I have trouble understanding why PRISM was used to assess model performance for meteorological fields, given the comments in the Lundquist et al. (2019) paper you cited. It seems fine to compare against PRISM, but perhaps not to “assess model performance.”
Line 266 – Could you note the spatial resolution of the ASO product used here? At 50 m, point-to-grid errors could be one reason for the apparent underestimation by ASO relative to SNOTEL.
Line 272 - Results section would be easier to follow if subheadings were included.
Figure 3 caption – would read more easily if you noted a-c in your descriptions of which variables are identified. The statistics used to evaluate these differences are essentially introduced in this caption; could you move that to the methods?
Figure 3 – I’m surprised the UCD configurations melt so much earlier when they don’t appear to be warmer. Is it possible that the spatial averages here obscure spatial differences that would explain why the UCD simulations melt earlier? Figure S-4 kind of gets at this, but I think it needs more interpretation for the reader.
Line 319 – run-on sentence.
Figure 4 – Nice figure. Could you again add an introduction to these statistics in the methods so we know how you’re evaluating variance earlier? Why do c and d have only two points marked on the x-axis?
Line 335 – Were there any quantitative statistics provided to determine that BSU-CFSR2 agreed best with PRISM?
Figure 6 – Some panels appear not to use their full color scale (e.g., Temperature). Is that due to outlier pixels? There’s a lot of wasted white-space in these maps – why not use the full plotting area for each map?
Line 380 – This paragraph describes Figure 7, but the next paragraph also seems to introduce Figure 7 as though it’s a new topic?
Line 401 – “The objective of this study is not to replicate the observations…” In that case, I strongly recommend changing the final sentence in the abstract, because that implies you’re identifying the best model configuration.
Line 413 – Are the differences notable or minimal? I would say minimal. Maybe better to describe quantitatively – you could note the among-model variance vs the seasonal variance?
Line 417 – “are slightly larger…” The differences are twice as big for the subgrid-scale physics schemes but are small in both cases; I would suggest rephrasing to clarify.
Line 420 – What is meant by “more muted-nature”? I think this sentence speculating about differences in groundwater signals across years would be better in the discussion.
Figure 8 – Is this color gradient perceptually uniform? It appears not to be (e.g., see Figure 1b in Crameri et al., 2020). It would be helpful to see a perceptually uniform palette here if possible.
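For reference, a minimal matplotlib sketch of the requested swap; `viridis` is a perceptually uniform map that ships with matplotlib, and the cmcrameri package provides the Crameri et al. (2020) scientific colormaps (the `swe_diff` array below is hypothetical):

```python
import matplotlib.pyplot as plt
import numpy as np

swe_diff = np.random.randn(50, 50)  # hypothetical gridded SWE field

fig, ax = plt.subplots()
# 'viridis' is perceptually uniform and ships with matplotlib; the
# cmcrameri package (Crameri et al., 2020) offers alternatives, e.g.
#   from cmcrameri import cm; cmap = cm.batlow
im = ax.imshow(swe_diff, cmap="viridis")
fig.colorbar(im, ax=ax, label="SWE (hypothetical units)")
plt.show()
```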
Line 448 – “with an eye towards how to represent…” Without calibration or serious validation efforts, I don’t think this study tells us about how to represent these interactions in models. I do think it tells us about where the most important uncertainties are, though (in your next sentence).
Line 454 – I don’t remember a prior discussion of boundary conditions – is this referring to boundary conditions at the land surface driven by differences in the subgrid-scale physics schemes?
Line 456 – This would be more convincing if statistics on BSU-CFSR2 vs other models were presented. How does identifying this configuration allow researchers to prioritize process studies and observational constraints? What would these be, specifically?
Figure S6 – Could you use a different color scheme that doesn’t have a diverging gradient? I think the diverging gradient is most appropriate for your maps showing differences (e.g., value scales that center on zero).
Line 467 – “Latent heat is posited…” by whom? Are you? I think you could state with more confidence than “posit” that other energy balance components (including but not exclusively latent) mediate the influence of shortwave spatial variability on temperature spatial variability.
Line 470 – You lost me here. This paragraph is ostensibly about how terrain shading algorithms affect radiation flux? How does this affect our ability to extrapolate findings from one mountainous watershed to another? The multiple “if” statements in here are also a little confusing – did the present study show these things or not?
References
Crameri, F., Shephard, G.E. & Heron, P.J. The misuse of colour in science communication. Nat Commun 11, 5444 (2020). https://doi.org/10.1038/s41467-020-19160-7
Citation: https://doi.org/10.5194/egusphere-2022-437-RC1
- AC1: 'Reply on RC1', Zexuan Xu, 29 Sep 2022
- AC4: 'Reply on RC1 After Revision', Zexuan Xu, 09 Jan 2023
RC2: 'Comment on egusphere-2022-437', Anonymous Referee #2, 06 Aug 2022
The integrated process model (IPM), which resolves processes extending from the atmosphere through the bedrock, has been a hot topic in recent years. Using IPMs, researchers try to investigate the interactions between the atmosphere and subsurface hydrological processes (e.g., lateral flows, groundwater dynamics), which used to be neglected by traditional meteorological modeling work. The ParFlow-CLM model is a well-known tool that couples a sophisticated one-dimensional land surface model (CLM) with a three-dimensional groundwater model (ParFlow). Xu et al. tested the sensitivity of several hydrometeorological variables, simulated by the WRF model coupled with an integrated hydrologic model, to the choices of physical parameterizations, meteorological forcings that provide lateral boundary conditions, and terrain-shading options. The authors found that physical parameterizations contribute the largest spatiotemporal variance in the simulated temperature, precipitation, and other related hydrological variables. Although the topic is important and the introduction is well written, I still think the innovation is not strong and the manuscript needs major revision. My major concerns are below:
- The authors emphasize the necessity and importance of IPMs in the introduction and also present the IPM as one of the innovations of this research. However, the simulation work is based on one-way coupling (the WRF-simulated meteorological forcings drive ParFlow-CLM). Whether this one-way coupling can be called an IPM is unclear, as there is no feedback between the meteorology and the subsurface hydrology.
- Another issue is that the finding that “physical parameterization is much more important than lateral or initial conditions” has been revealed by numerous works in the meteorological discipline. For example, Solman and Pessacg (2012) found that the largest spread among WRF ensemble members is caused by different combinations of physical parameterizations. Pohl et al. (2011) tested the uncertainties of WRF simulations caused by physical parameterizations, lateral forcings, and domain geometry, and they also suggested that physical parameterizations have the largest influence on precipitation. So, from the perspective of meteorology, the current finding is not surprising. The authors should review the previous works and rethink the added value of the current work.
- Since the ERW is a heavily instrumented catchment with a growing atmosphere-through-bedrock observation network (emphasized in the abstract), and “the goal of this work is to provide the mountain hydrology research community with a properly-configured IPM that can inform ongoing and future field campaigns and their process-modeling needs in the UCRB,” why not use the in-situ observations to evaluate T2m and precipitation?
- Moreover, I am really confused about the use of ParFlow-CLM here. Is it used only to provide streamflow and groundwater storage? Are the simulated snow and ET provided by ParFlow-CLM or by the default land surface model in WRF? Actually, ParFlow-CLM is often used to investigate the potential influence of three-dimensional groundwater on the responses of terrestrial hydrological processes to meteorological forcings (e.g., numerous high-impact works by Maxwell and Condon). However, here I did not see what would be different if we used a traditional one-dimensional land surface model to investigate the same issue. I suggest the authors compare the results from the default WRF land surface simulation with those from ParFlow-CLM (such as ET and total water in the soil column). This may help enhance the innovation of the current work.
- The experimental design needs more detailed information. I suggest the authors provide more introduction to the experimental design. For example, why are only CFSR2 and ERA5 used in the UCD and NCAR simulations? Why is the no3DRad_inner radiation option used only in BSU_CFSR2 and BSU_ERA5?
- I suggest showing the topography of the inner domain in Figure 1, which will be helpful to better understand the influence of the 3D radiative scheme. Currently, I am confused why the valley gets more radiation after considering the topographic shading and slope effects.
- Moreover, the authors should proofread the manuscript. For example:
- Should the Figure S-4 in L299 be Figure S-1?
- Should “Figure S-3 and Figure S-4” be “Figure S-3”?
- There is no description or analysis of Figures 8c-8d.
- I also noticed some grids are masked out in Figs. S-5 and S-6, but no interpretation is given.
Reference:
Solman and Pessacg. (2012). Evaluating uncertainties in regional climate simulations over South America at the seasonal scale.
Pohl et al. (2011). Testing WRF capability in simulating the atmospheric water cycle over Equatorial East Africa
Citation: https://doi.org/10.5194/egusphere-2022-437-RC2
- AC2: 'Reply on RC2', Zexuan Xu, 29 Sep 2022
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2022/egusphere-2022-437/egusphere-2022-437-AC2-supplement.pdf
- AC5: 'Reply on RC2 After Revision', Zexuan Xu, 09 Jan 2023
RC3: 'Comment on egusphere-2022-437', Anonymous Referee #3, 07 Aug 2022
This work presents a model study in a headwaters catchment in the Upper Colorado Watershed. In general the work is interesting and well-written but the presentation is somewhat confusing. There are some major points I think the authors need to address before the work's suitability for publication can be assessed. They are detailed below.
General Comments:
-The terminology of so-called IPMs is confusing, along with the reference to these as coupled models, which I would argue they are not. It's very confusing to discuss this work in the framework of different models as opposed to just the forcing used to drive the hydrologic model. The language around the different products used is very confusing and makes much of the discussion hard to follow. Some of the meteorological forcing datasets appear to be used to drive the models directly, but in the introduction it appears that only WRF simulations are used to drive models. A complete re-write of this entire section is needed to make this clear. What did the authors do with the hydrologic outputs from the WRF simulations? Why are the Noah and Noah-MP models used interchangeably, but the results not compared to ParFlow-CLM? Except for groundwater, which isn't discussed very much in the manuscript, all the same results should be in the WRF simulations. Why didn't the authors just run the WRF-ParFlow model or even mention its existence? They talk about everything in a coupled sense, but the models are in no way formally coupled (unless something is happening that is not discussed in the manuscript); the output files from WRF simulations are saved, somehow reformatted (this is not clear), and used to drive ParFlow-CLM. They could drive any hydrologic model and it wouldn't be considered a coupled platform; likewise, the standard forcing products the authors might choose to drive the simulations off the shelf are also generated with atmospheric models, yet I would never think of these as an IPM. I suggest the authors be much more transparent about this aspect and remove the terminology in a revision. They should also provide some clear language about what is actually being done here: is this a comparison between forcing generated with WRF vs. other approaches? Why didn't the authors just run forced by PRISM?
-Coupled vs. uncoupled processes and feedbacks. There has been a lot of work to understand the role of feedbacks between two-way coupled hydrologic models and atmospheric models. Examples include WRF-Hydro-WRF (e.g., Arnault 2016), COSMO-CLM-ParFlow (e.g., Keune 2016, 2019), WRF-ParFlow (e.g., Maxwell 2011, Forrester 2020), feedbacks over complex terrain (Ban 2014), and other more conceptual approaches (e.g., Miguez-Macho 2007). This is not an exhaustive list, but it demonstrates that much work has been done to study these feedbacks. Some of these studies are in complex terrain and even suggest that the approach used by the authors may not be valid at high resolution without lateral flow. These studies all systematically compare different types of model physics (e.g., free drainage, the standalone atmospheric model, the fully coupled system) and use varying metrics to diagnose coupling strength and changes in the atmosphere. I suggest the authors read these prior studies carefully and develop a new section that summarizes (rather than ignores the existence of) this body of work and uses it to put the current study in context. This will help frame the current work and make it look much less like a patchwork of runs that are loosely tied together. This will also help clarify my point above, helping the reader follow what is being done and what runs are conducted in the current work.
-Variability in point processes compared to integrated or averaged measures. I mention this as a specific instance below, but it is also a general point: there are instances where the authors present differences locally (at a point) that do not persist synoptically. Do the different forcing products or microphysics (I think this is the point the authors make) make some difference locally for, e.g., precip and radiation, while some averaged quantity remains unaffected? It appears this is the case for much of the analysis. That is, topographic shading makes a difference locally in LH flux, but the domain-averaged LH flux remains unchanged between cases. The authors draw one conclusion (local differences) without acknowledging the other (same net energy flux over the domain).
-Atmospheric uncertainty. There has been much work on differences in model physics in a model such as WRF, which allows different physical parameterizations to be "swapped out" easily in simulations by changing the namelist. This is an important aspect of uncertainty, but it is almost always put in the context of one of the major forms of uncertainty in the atmosphere: propagation of initial conditions. One should always determine that such a physics change is robust using, e.g., time-shifted uncertainty in an ensemble-type approach (e.g., Walser 2004); a minimal sketch of this comparison follows this list. Often, upon inclusion of uncertainty in the initial model state (in the atmosphere), the differences in physical parameterization no longer dominate.
-Can the authors compare meteorological forcings at the site? A heavily instrumented catchment (abstract line 16-) should have observations of meteorological variables and snow outside of the SNOTEL sites (which I don't think are used for comparison and should contain precipitation and temperature); even precipitation and temperature at gage locations would be very useful. It appears that the authors treat the PRISM product like observations, which is unfortunate and hopefully accidental. The PRISM product is a model, even if statistical, that takes into account observations in a region. One would then assume that PRISM is ingesting precip from the SNOTEL sites in the domain, but this isn't stated (are there even any observations that PRISM is using, or is it totally unconstrained?).
-The authors should compare the ET observations in Ryken et al. (2022) to the results of the current work (both WRF and ParFlow-CLM). Additionally, it appears that the Ryken et al. (2022) paper has meteorological observations of precip, temperature, and radiation that might be useful to partly address my comment above.
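Returning to the atmospheric-uncertainty point above, here is a hedged sketch of the physics-spread vs. initial-condition-spread comparison, assuming each ensemble has been reduced to a domain-mean daily time series per member (all arrays below are hypothetical placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical arrays, shape (n_members, n_days): domain-mean precipitation
# for an ensemble spanning physics options and one spanning time-shifted
# (perturbed) initializations, as in Walser and Schaer (2004).
physics_ens = rng.random((3, 365))
init_ens = rng.random((5, 365))

# Member spread at each time, averaged over the year.
physics_spread = physics_ens.std(axis=0).mean()
init_spread = init_ens.std(axis=0).mean()

# A physics signal is only robust where it exceeds the IC-driven noise.
print(f"physics spread: {physics_spread:.3f}, initial-condition spread: {init_spread:.3f}")
```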
Specific comments
line 65: is PF-CLM being cited using Maxwell et al. 2015 (cited on line 617)? That paper references a large-scale simulation that, as I read it, is forced externally and does not use or describe the CLM model.
Line 240+ This section describes the PF-CLM model in general, but I could not find specifics for the model domain used in this study. What is the resolution or model configuration for the PF-CLM domain? How deep is the subsurface? What is the lateral resolution? How was this matched to the forcing datasets or the WRF outputs? Was there a balance of water and fluxes between the grids? How were model parameters determined? Are there references to prior work on this model? Calibration? If not, the authors might include a description of these aspects in the current manuscript and as supplemental material.
Lines 275-281. The UCD datasets appear to have the most precip but the ear
Figure 3 caption (~line 305): a, b, c are used to identify plots in the figure but are not used in the caption. Also, it does not appear that 3c is described in the caption.
Figures 5, 6 and associated discussion. An interesting point that might be made here is that while local spatial differences are apparent in Figure 6, the domain averages (even for SWE) are the same between shaded and non-shaded formulations. This suggests that while it may be striking visually to include shading, the upscaled water balance for the catchment isn't sensitive.
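A minimal sketch of the local-vs.-domain-average contrast described here, assuming shaded and non-shaded runs on a common grid (array names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
shaded = rng.random((100, 100))    # hypothetical SWE with terrain shading
no_shade = rng.random((100, 100))  # hypothetical SWE without shading

diff = shaded - no_shade
# Large local differences can coexist with a negligible domain-mean change.
print("max |local difference|:", float(np.abs(diff).max()))
print("domain-mean difference:", float(diff.mean()))
```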
lines 383- I'm not sure I agree with these conclusions. While the cumulative variability in outflow resulting from the different forcing products creates different cumulative outflows, Figure 7a indicates that there is no difference in timing across all the forcing datasets. My suspicion is that the differences in outflow are due to total water quantity (Figure 5a suggests this as well) and are simply a precip bias artifact in the different WRF runs.
line 443: "Here, we ... coupling WRF and ParFlow" rephrase, this sentence isn't correct, the models were not coupled
line 476+: This text appears to acknowledge the lack of coupling in the current work (as an aside, what is "one-way coupled"? This is not actually coupled at all, as it appears the results from WRF were simply used as forcing for the PF-CLM model). This isn't bad, but as mentioned above it should be discussed up front. The argument here regarding computational expense as an excuse for not running coupled simulations is incorrect: prior studies with, e.g., the WRF-PF model have shown that ParFlow takes approximately 1% of the total computational time, compared to WRF at 99%. Thus, if the authors ran WRF for this domain, the additional expense to run WRF-PF is a negligible increase in cost. Also, the authors might want to correctly identify that the Forrester et al. study (line 483) was run with WRF-PF, and the authors might want to read and cite Forrester 2020, which discusses limitations of running high-resolution, uncoupled WRF simulations in mountain terrain (the CO headwaters were studied), where the lack of lateral flow caused changes in the surface energy budget and the height of the boundary layer.
line 498: Is the watershed highly instrumented now or will it be? This seems at odds with statement in the abstract (line 16)?
References
Arnault, J., Wagner, S., Rummler, T., Fersch, B., Bliefernicht, J., Andresen, S., & Kunstmann, H., 2016: Role of Runoff–Infiltration Partitioning and Resolved Overland Flow on Land–Atmosphere Feedbacks: A Case Study with the WRF-Hydro Coupled Modeling System for West Africa, Journal of Hydrometeorology
Ban, N., Schmidli, J., and Schär, C. 2014: Evaluation of the convection-resolving regional climate modeling approach in decade-long simulations, Journal Geophyscal Research Atmosphere
Forrester, M. and Maxwell, R. 2020: Impact of lateral groundwater flow and subsurface lower boundary conditions on atmospheric boundary layer development over complex terrain, Journal of Hydrometeorology
Frei, C. and Schär, C. 1998: A precipitation climatology of the Alps from high-resolution rain-gauge observations. International Journal of Climatology
Keune, J., Gasper, F., Goergen, K., Hense, A., Shrestha, P., Sulis, M. and Kollet, S. 2016: Studying the influence of groundwater representations on land surface-atmosphere feedbacks during the European heat wave in 2003, Journal of Geophysical Research - Atmospheres
Keune, J., Sulis, M., and Kollet, S. J. 2019: Potential added value of incorporating human water use on the simulation of evapotranspiration and precipitation in a continental-scale bedrock-to-atmosphere modeling system – A validation study considering observational uncertainty. Journal of Advances in Modeling Earth Systems
Miguez-Macho, G., Y. Fan, C. P. Weaver, R. Walko, and A. Robock, 2007: Incorporating water table dynamics in climate modeling: 2. Formulation, validation, and soil moisture simulation. Journal of Geophysical Research
Maxwell, R., J. K. Lundquist, J. D. Mirocha, S. G. Smith, C. S. Woodward, and A. F. B. Tompson, 2011: Development of a coupled groundwater–atmosphere model, Monthly Weather Review
Ryken, A. C., Gochis, D., & Maxwell, R. 2022: Unravelling groundwater contributions to evapotranspiration and constraining water fluxes in a high-elevation catchment. Hydrological Processes
Walser, A., and C. Schaer, 2004: Convection-resolving precipitation forecasting and its predictability in Alpine river catchments, Journal of Hydrology
Citation: https://doi.org/10.5194/egusphere-2022-437-RC3
- AC3: 'Reply on RC3', Zexuan Xu, 29 Sep 2022
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2022/egusphere-2022-437/egusphere-2022-437-AC3-supplement.pdf
- AC6: 'Reply on RC3 After Revision', Zexuan Xu, 09 Jan 2023
Cited
2 citations as recorded by Crossref:
- Impact of Seasonal Snow-Cover Change on the Observed and Simulated State of the Atmospheric Boundary Layer in a High-Altitude Mountain Valley, B. Adler et al., https://doi.org/10.1029/2023JD038497
- Sensitivities of subgrid-scale physics schemes, meteorological forcing, and topographic radiation in atmosphere-through-bedrock integrated process models: a case study in the Upper Colorado River basin, Z. Xu et al., https://doi.org/10.5194/hess-27-1771-2023