This work is distributed under the Creative Commons Attribution 4.0 License.
Assessing the Hydrological Impact Sensitivity to Climate Model Weighting Strategies
Abstract. Climate change impact studies rely on ensembles of General Circulation Model (GCM) simulations. Combining ensemble members is challenging because of uncertainties in how well each model performs. The concept of model democracy, in which equal weight is given to each model, is common but has been criticized for ignoring regional variations in performance and dependencies between models. Various weighting schemes address these concerns, but their effectiveness in impact studies, which integrate GCM outputs with separate impact models, remains unclear.
This study evaluated the impact of six weighting strategies on future streamflow projections using a pseudo-reality approach, in which each GCM is treated in turn as the “true” climate. The analysis used an ensemble of 22 CMIP6 climate simulations and a hydrological model applied to 3,107 North American catchments. Since climate model outputs often undergo bias correction before being used in hydrological models, the study implemented two approaches: one with bias correction applied to the precipitation and temperature inputs, and one without. Weighting schemes were evaluated based on their biases relative to the pseudo-reality GCM for annual mean temperature, precipitation, and streamflow (a schematic illustration of this leave-one-out protocol follows the abstract).
Results show that unequal weighting schemes produce significantly better precipitation and temperature projections than equal weighting. For streamflow projections, unequal weighting offered only a minor improvement, and only when bias correction was not applied; with bias correction, equal and unequal weighting delivered similar results. While bias correction has limitations, it remains essential for realistic streamflow projections in impact studies. A pragmatic strategy may be to combine model democracy with the selective exclusion of models based on robust performance metrics.
This study provides insight into how weighting affects hydrological assessments. It emphasizes the need for careful methodological choices and further research to manage uncertainties in climate change impact studies. These findings will help improve the accuracy of climate projections and the reliability of hydrological impact assessments in a changing climate.
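To make the leave-one-out pseudo-reality protocol concrete, the following minimal Python sketch illustrates the evaluation logic with synthetic data and a single illustrative skill-based weighting scheme. It is not the study's code: the data, the weighting formula, and all parameter values are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for 22 GCMs: 30 years of an annual-mean variable.
# Real inputs would be CMIP6 precipitation/temperature series.
n_models, n_years = 22, 30
model_means = rng.normal(10.0, 1.0, n_models)
hist = model_means[:, None] + rng.normal(0.0, 1.0, (n_models, n_years))
fut = hist + rng.normal(2.0, 0.5, n_models)[:, None]  # shifted "future"

def skill_weights(ens_hist, truth_hist, sigma=1.0):
    """One illustrative scheme: weight ~ exp(-(RMSE to pseudo-obs / sigma)^2)."""
    rmse = np.sqrt(((ens_hist - truth_hist) ** 2).mean(axis=1))
    w = np.exp(-((rmse / sigma) ** 2))
    return w / w.sum()

abs_bias_equal, abs_bias_weighted = [], []
for k in range(n_models):            # each model takes a turn as the "truth"
    keep = np.arange(n_models) != k
    w = skill_weights(hist[keep], hist[k])
    truth_future = fut[k].mean()
    member_futures = fut[keep].mean(axis=1)
    abs_bias_equal.append(abs(member_futures.mean() - truth_future))
    abs_bias_weighted.append(abs(w @ member_futures - truth_future))

print(f"mean |bias|, equal weighting:   {np.mean(abs_bias_equal):.3f}")
print(f"mean |bias|, unequal weighting: {np.mean(abs_bias_weighted):.3f}")
```

In the study itself, the weighted quantity of interest is streamflow from a hydrological model forced by each member's precipitation and temperature, evaluated with and without bias correction against the pseudo-reality historical period.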
Status: closed
- RC1: 'Comment on egusphere-2024-1183', Anonymous Referee #1, 04 Jul 2024
Summary:
The manuscript titled "Assessing the Hydrological Impact Sensitivity to Climate Model Weighting Strategies" evaluates the impact of different weighting strategies for GCM outputs on streamflow projections. The study uses the concept of pseudo-reality to address the challenge of lacking future observations.
Strengths:
1) The manuscript provides a comprehensive analysis of various weighting strategies, which could be helpful for further studies.
2) The use of pseudo-reality to overcome the lack of future observations can be useful and, if well explained, could provide insights into model weighting strategies.
Weaknesses:
1) However, the actual scientific contribution of the paper is difficult to assess, as the research gap is not clearly defined. For example, a specific aim of the study is to investigate how the selection of assessment criteria influences the results of climate model weighting in hydrological assessments (line 125), while it seems that this question has been well discussed in the literature (lines 90-95). Although it is further mentioned that this is challenged by the lack of future observations and can potentially be addressed with pseudo-reality, the research questions are still not clear. What is the state of the art, and what are the implications of the study regarding the use of pseudo-reality for weighting schemes?
2) The rationale for using pseudo-reality to address the lack of future observations is not well explained. More detailed justification and explanation of the hypothesis behind pseudo-reality are needed; citations alone are not sufficient to justify the effectiveness of the practice in the context of this study. Also, are there alternatives to pseudo-reality, and how can it be ensured that the study's conclusion about the effectiveness of weighting is not just a product of this particular technique? Some further discussion is needed.
3) The manuscript's presentation needs improvement. For example, some statements lack references (e.g. lines 100-105), and some paragraphs need to be more coherent (e.g. the last two paragraphs of the introduction are repetitive, with many aims/goals/objectives presented). Also, some tables and figures are confusing. For example, the meaning of the ID number in Table 1 is unclear; it appears to be used only in Figure 2. What is the order of the climate models in Figure 2? Is it just random? If so, the figure is not informative, and I would suggest ordering it by ECS or some similar metric to make it meaningful.
Specific questions:
- Line 156: what specific statistical metric is being compared?
- Line 159: how is the choice of weighting model related to climate sensitivity here, i.e. why can the finding of different ECSs highlight the potential importance of the choice of weighting model? Please clarify.
- Lines 169-170: Please justify the operation. Why is this not done using the corresponding GCM data?
- For the HMETS model, no performance metrics seem to be reported in the paper.
- Line 271: Yes, I agree, but how are the GCM meteorological forcings matched to the lumped hydrological model in this study, where the minimum basin size is 500 km2?
- Line 276: The statement is still not referenced and the rationale for the practice is not explained.
- Line 281: I am more confused here. Does this mean that pseudo-reality is assumed to be more accurate than ERA5 data? I am also curious how the streamflow prediction performs under the pseudo-reality historical scenario: if it needs bias correction itself, how can it be used as a reference for other GCMs? Please clarify.
- Lines 310-312: Why does red coloring suggest better performance than equal weighting? The metric here is bias, so my understanding is that it depends on the estimation errors of the equal method. If the equal method already has a negative bias (i.e. red in Figure 4a), doesn't a negative difference in Figure 4b-f (i.e. red) imply an even worse case (see the worked example after this list)? I hope I have just misunderstood something.
- Lines 326-329: Is it possible to explain why the spatial distribution pattern is as it is, i.e. why western and south-eastern catchments have a negative bias? Understanding the underlying reasons would help identify which potential geographical factors are being referred to here.
- Section 4.1: This is more of a repetition of the introduction. Might consider merging it with the introduction.
- Lines 487-489: As an important conclusion, which is also highlighted in the abstract, it would be better to articulate here how it is supported by the results of this study.
Overall, while the manuscript provides a comprehensive analysis, it needs more clarity in its hypotheses, a clearer presentation, and a better-defined research gap to improve its overall quality. More scientific discussion (e.g. of the robustness and applicability of the conclusions) is needed to enhance its significance and scientific contribution.
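A worked example of the ambiguity raised in the comment on lines 310-312, assuming the panels plot signed bias (Fig. 4a) and signed differences from the equal-weighting bias (Fig. 4b-f): if the equal-weighting bias in a catchment is $B_{eq} = -5\,\%$ and a scheme's difference panel shows $\Delta = B_{scheme} - B_{eq} = -2\,\%$, then $B_{scheme} = -7\,\%$ and $|B_{scheme}| > |B_{eq}|$, so the red (negative) difference marks a deterioration. Under this reading, a red difference indicates improvement only where the equal-weighting bias is positive; if the figure instead plots differences in absolute bias, the concern dissolves, which is what the authors should clarify.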
Citation: https://doi.org/10.5194/egusphere-2024-1183-RC1
- AC1: 'Reply on RC1', Mehrad Rahimpour Asenjan, 30 Jul 2024
- RC2: 'Comment on egusphere-2024-1183', Anonymous Referee #2, 17 Jul 2024
Review of "Assessing the Hydrological Impact Sensitivity to Climate Model Weighting Strategies" by Asenjan et al.
- General comments
The paper is generally well written and presented but suffers from the use of terms and assumptions that ruin the validity of the work that was carried out. Furthermore, many of the choices made lack justification and a discussion of the available alternatives.
While I acknowledge the value of using a GCM as “pseudo-reality”, I find the alleged aim “to seek accuracy and reliability in future hydrological runs” an outrageous claim. Even with a clear disclaimer at the beginning of the paper (which is not there), these terms are simply not applicable in this context. I think the authors should have approached this work as an uncertainty-reduction exercise rather than an accuracy effort.
Moreover, there are important concepts that the authors have neglected:
1. Model weighting schemes are subjective and do not necessarily outperform the one-model-one-vote approach (model democracy); a discussion of this would have provided a more solid basis for the work.
2. Assuming that a model that performs well in the control period should be deemed more credible in the future period than models that did not is debatable and should at least be discussed.
3. If a few runs are “outliers”, it does not necessarily mean that they will get it wrong; the majority could be wrong too. That is why model weighting is a tricky job and cannot do without a comprehensive assessment of the associated uncertainty.
For these reasons I think this paper is unfit for publication.
I have noted specific comments below.
- Specific comments (section addressing individual scientific questions/issues)
- 16 - “Various weighting schemes address these concerns, but their effectiveness in impact studies, which integrate GCM outputs with separate impact models, remains unclear.” Their effectiveness will remain unclear until the future unfolds and we find out what actually happened.
- 35-36 – Accuracy and reliability are concepts that pertain to weather forecasting and sub-seasonal to seasonal prediction; I find it misleading to use them in the context of climate/hydrological projections.
- 48 – “This variability is widely recognized as a primary source of uncertainty”. This variability is only fair and must exist if we want to describe and account for plausible futures of the variable of interest.
- 57 and 60-70 – I think it is more complex than that. “While model democracy has been successful in replicating the mean state of the observed historical climate (Reichler & Kim, 2008), its applicability and reliability in future impact assessments remain uncertain.” Broadly speaking, model democracy makes no judgement on the different models of the ensemble, while weighting or excluding runs implies some sort of judgement that is not easy to justify and often ends up being, to some degree, subjective. In particular, not everyone agrees that if a model decently replicates a given variable in the control period, it will continue to do so in the future. Therefore: “suggesting model democracy might not be the best choice in regions where some models are more reliable” – this “more reliable” is the crucial concept in the model-democracy versus model-selection/weighting debate, because measuring the reliability of future quantities simply isn't possible until the future unfolds.
One may find indications based on assumptions, but I think the ease with which some concepts (accuracy and reliability) are reported and treated is misleading.
L.73-74 – “demonstrating more accurate projections compared to simple averaging”. Same as above: how can you “demonstrate” accuracy in the future?
- 91 – I would add Giuntoli et al. 2021 on the effect of weighting via streamflow data.
- 97 – On pseudo-observations – model-as-truth, imperfect model test. Please explain to the reader that the limited number of simulations in earlier studies did not allow internal variability to be separated from structural differences among the models. With large-ensemble initial-condition simulations there is the advantage of having many simulations that can be used as pseudo-observations (Deser et al. 2020).
- 115 – “significance” – please avoid terms that have to do with statistical testing.
- 119 – “the objective is to understand the complex interactions between weighting schemes and their effects on […]” – I don't think this study allows the complex interactions between weighting schemes to be understood; it simply conducts a sensitivity analysis with a handful of them. I suggest more realistic wording for the objective.
- 123-127 – Again, I struggle with the terms “accuracy” and “reliability” in the context of hydrological projections.
I would either write a clear disclaimer in the introduction or consider using other terms.
- 131 – The data set includes ~14k catchments, from which ~3k were chosen. The selection criteria are debatable. See Giuntoli et al. 2015, where catchments are selected to be of comparable size to the grid-cell resolution of the global models. A minimum area of 500 km2 is therefore fair, to avoid flashy catchments, but there should be an upper limit too.
L.138 – “perform comparably to observation data in hydrological modelling” is very vague. It really depends on the variable and the location. This point should be further justified and described. I would also add a citation to the assessment of ERA5 precipitation by Lavers et al. 2022.
- 145 – Figure 1, what is the purpose of indicating mean annual temperature?
- 170 – In addition to the reference for the method (Cannon 2018), there should be an explanation for using bias correction based on pseudo-reality GCM data, with references and limitations (see the sketch after this list).
- 175 – What is the spatial and temporal resolution of the HMETS model? There should be a description of the resolution at which GCMs are input into HMETS and the resolution that HMETS outputs.
- 183 – Six weighting schemes are adopted. Why? What led you to this choice? A review of other studies employing weighting schemes? E.g., in the hydrological community it has been shown how far off hydrological models fed by GCMs are from observations, particularly with regard to extremes (e.g. floods). Some have attempted to weight the models (or select them) on the basis of their ability to reproduce the right timing of hydrological events, as opposed to the quantity (m3/s), e.g. Giuntoli et al. 2021.
- 188 – Monthly observed and simulated series. Are the data aggregated from a daily to a monthly temporal scale? Also, is “observed” in this case the pseudo-reality series? If so, I find the use of “observed” confusing for the reader.
- 195 – Assessing how a GCM aligns with the multi-model mean in future projections. I think the assumption made by this metric is wrong: a model that does not align with the majority of the models is deemed less credible. What if that model is the only one that gets it right?
Again, observations are mentioned for the REA metric; are these observations or pseudo-reality model runs (see the REA formula after this list)?
- 211 – Skill metric. This metric implies that models that best reproduce observed data are to be trusted more in their projections. This assumption should be clarified and discussed, because it is not necessarily true that models that do well in the past will continue to do so in the future. To some extent this characteristic increases their credibility, but it does not ensure improvements in future performance.
- 265 – On the need to bias-correct GCM output. Please justify this choice further, mentioning the drawbacks of bias correction, e.g. the introduction of an additional source of uncertainty and the reduction of inter-GCM variability. I would cite Ehret et al. 2012.
- 272 – It seems there is one and only one hydrological model that routes streamflow using precipitation and temperature. As hinted above, a description of the other available options (models) is needed.
- 281-282 – Again, the use of the term accuracy is far-fetched. “Accurately” capturing the key underlying hydrological processes is wishful thinking, unattainable with a plethora of GCMs run at coarse resolution, then bias-corrected, then fed to a hydrological model that uses P and T as input. The cascade of uncertainty is such that the accuracy you look for is simply not there. You can put effort into trying to minimize uncertainty, but you cannot write about accuracy here.
- 283-285 – I don't understand this statement: “as long as the processes are reasonably represented (are you assessing this?) the model performance is not of critical concern”. Which model? And why is it not a concern?
- 291-292 – Why are the two methods supposed to yield similar results?
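Regarding the comment on line 170 above, here is a minimal univariate empirical quantile-mapping sketch of the kind of bias correction being questioned. It is a deliberate simplification for illustration, not the multivariate MBCn method of Cannon (2018) that the manuscript cites; `ref_hist` stands in for the pseudo-reality historical series.

```python
import numpy as np

def quantile_map(model_hist, ref_hist, model_target):
    """Empirical quantile mapping: map each target value through the model's
    historical CDF, then invert via the reference (pseudo-reality) CDF."""
    sorted_hist = np.sort(model_hist)
    # Empirical non-exceedance probability of each target value.
    p = np.searchsorted(sorted_hist, model_target, side="right") / sorted_hist.size
    p = np.clip(p, 0.0, 1.0)
    # Invert the reference CDF at those probabilities.
    return np.quantile(ref_hist, p)
```

Whatever the method, correcting ensemble members toward one GCM's pseudo-reality inherits that GCM's structural characteristics, which is part of the limitation the comment asks to be discussed.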
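For the comment on line 195, the reliability factor of the original REA method (Giorgi and Mearns, 2002) combines the two criteria being probed, a bias term and a convergence term; whether the manuscript uses exactly this form is a question for the authors:

$$R_i = \left[\left(\frac{\epsilon}{|B_i|}\right)^{m}\left(\frac{\epsilon}{|D_i|}\right)^{n}\right]^{1/(m n)}$$

where $B_i$ is model $i$'s bias relative to the reference (observations or, here, presumably the pseudo-reality run), $D_i$ is its distance from the REA ensemble-mean change, $\epsilon$ estimates natural variability, and each bracketed factor is capped at 1 (commonly $m = n = 1$).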
References:
Giuntoli, I., et al. (2021). Going beyond the ensemble mean: Assessment of future floods from global multi-models. Water Resources Research, 57, e2020WR027897. https://doi.org/10.1029/2020WR027897
Deser, C., et al., 2020: Insights from Earth system model initial-condition large ensembles and future prospects. Nat. Climate Change, 10, 277–286, https://doi.org/10.1038/s41558-020-0731-2
Giuntoli, I., et al. (2015), Evaluation of global impact models' ability to reproduce runoff characteristics over the central United States, J. Geophys. Res. Atmos., 120, 9138–9159, doi:10.1002/2015JD023401
Lavers, D.A., et al. (2022) An evaluation of ERA5 precipitation for climate monitoring. Quarterly Journal of the Royal Meteorological Society, 148(748) 3124–3137. Available from: https://doi.org/10.1002/qj.4351
Ehret, U., et al. (2012): HESS Opinions “Should we apply bias correction to global and regional climate model data?”, Hydrol. Earth Syst. Sci., 16, 3391–3404, doi:10.5194/hess-16-3391-2012
Citation: https://doi.org/10.5194/egusphere-2024-1183-RC2
- AC2: 'Reply on RC2', Mehrad Rahimpour Asenjan, 30 Jul 2024