Value of seasonal flow forecasts for enhancing reservoir operation and drought management in South Korea
Abstract. Drought poses significant challenges across various sectors such as agriculture, water resources, environment, and energy. In the past few decades, numerous devastating droughts have been reported worldwide, including in South Korea. A recent drought in South Korea, which lasted from 2013 to 2016, led to significant consequences including water restrictions and nationwide crop failures. Historically, reservoirs have played a crucial role in mitigating hydrological droughts by ensuring water supply stability. As climate change exacerbates the intensity and frequency of droughts, enhancing the operational efficiency of existing reservoirs for drought management becomes increasingly important. This study examines the value of Seasonal Flow Forecasts (SFFs) in informing reservoir operations, focusing on two critical reservoir systems in South Korea. We assess and compare the value derived from using two deterministic scenarios (worst-case and 20-year return period droughts) and two ensemble forecast products (SFFs and Ensemble Streamflow Prediction, ESP). Our study proposes an innovative method for assessing forecast value, providing a more intuitive and practical understanding by directly relating it to the likelihood of achieving better operational outcomes than historical operation. Furthermore, we analyse the sensitivity of forecast value to key choices in the set-up of the simulation experiments. Our findings indicate that while deterministic scenarios show higher accuracy, forecast-informed operations with ensemble forecasts tend to yield greater value. This highlights the importance of considering the uncertainty of flow forecasts in operating reservoirs. Although SFFs generally show higher accuracy than ESP, the difference in value between these two ensemble forecasts is found to be negligible. Finally, the sensitivity analysis shows that the method used to select a compromise release schedule between competing operational objectives is a key determinant of forecast value, implying that the benefits of using seasonal forecasts may vary widely depending on how priorities between objectives are established.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2024-1985', Louise Arnal, 13 Aug 2024
In this manuscript, the authors present a comparison between various ensemble forecasts and deterministic scenarios for informing reservoir operations for two reservoir systems and three drought events in South Korea. They evaluate gains in terms of forecast accuracy and skill, and in terms of the operational value related to storage and supply. They additionally test the sensitivity of the results to methodological choices, decisions usually made implicitly in research studies. The research questions are interesting and the manuscript is overall clear and concise. Below are a few comments that I hope will be helpful to revise the manuscript for publication.
General points:
-A clarification of the methods is needed, especially regarding the lead times and the decision-making time steps. For example, how can a decision be made every two months (i.e., bimonthly) for a forecast with two months lead time? Including a graphic illustrating the timeline between the forecast generation and the last decision made for a single water year would be really helpful, I think.
-More reflection is needed on the plausible physical explanations for some of the results to add some depth. For example, see my comment on L427-429.
-Please discuss the shortcomings associated with evaluating only two attributes of the forecast performance (i.e., accuracy and skill). Calculating more attributes, like correlation, variance and reliability, would give a fuller picture, which could impact the conclusion you draw on L448-450 regarding the link between forecast performance and value (a brief sketch of such attributes is given after this list).
-Are the codes you developed for the evaluation shared anywhere for others to follow your approach more easily? If not, please consider making them available.
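To make the third general point above more concrete, below is a minimal sketch of how additional verification attributes (correlation, spread versus RMSE, and a rank-histogram reliability check) could be computed for an ensemble forecast against observations. The function, array names, and shapes are illustrative assumptions made here; they are not part of the manuscript or of this review.

```python
import numpy as np

def extra_attributes(ens, obs):
    """ens: (n_times, n_members) ensemble forecasts; obs: (n_times,) matching observations."""
    ens = np.asarray(ens, dtype=float)
    obs = np.asarray(obs, dtype=float)
    ens_mean = ens.mean(axis=1)
    # Correlation between the ensemble mean and the observations
    correlation = np.corrcoef(ens_mean, obs)[0, 1]
    # Spread (mean ensemble standard deviation) compared with the RMSE of the ensemble mean
    spread = ens.std(axis=1, ddof=1).mean()
    rmse = np.sqrt(np.mean((ens_mean - obs) ** 2))
    # Reliability proxy: rank histogram of the observation within the ensemble
    # (approximately flat for a reliable ensemble)
    ranks = (ens < obs[:, None]).sum(axis=1)
    n_members = ens.shape[1]
    rank_hist, _ = np.histogram(ranks, bins=n_members + 1, range=(-0.5, n_members + 0.5))
    return correlation, spread, rmse, rank_hist
```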
Specific points:
-L23: In the abstract, please specify what key choices you’re looking at.
-L113-114: I would move this last sentence to the conclusions instead as it seems a bit out of place in the introduction.
-L122-130: Could you give a brief description of the hydrological regime of both regions? E.g., When are the peak flows? What drives runoff generation?
-L140: On L39 the dates for this event are 2013-2016. Please clarify.
-L159-162: What is the initialization frequency of the forecasts and what time period is covered by the forecasts you generated for this study?
-L162-163: The time period based on which the correction factors are calculated overlaps with the drought events in 2001-2002 and 2008-2009. This could be an unfair advantage for these events. Please clarify. Same comment for the ESP generation explained on L186.
-L167: Can you briefly list the four different forecasts/scenarios here as well?
-L172-173: What is the temporal aggregation of the forecasts/scenarios, based on which the decisions are made? E.g., Weekly, monthly, etc.
-L173: Please specify how much time there is in between each decision.
-L174: Is the process iteratively conducted at the start of each month? The frequency is unclear.
-Figure 2: Very nice graphic!
->You could refer the readers to each figure compartment being described at the start of each sub-section below.
->Could you specify whether bimonthly refers to twice a month or every two months please?
->It would be useful to add a short section (3.1.4) for the reservoir simulation step, both to have coherence between the numbering of the sections and the boxes in the figure (i.e., step 1 is explained in 3.1.1, step 2 in 3.1.2, etc.) and to provide some information on how this is done (e.g., I don't quite understand how decisions are made at different time steps with forecasts that cover different lead times and how often a new forecast is produced).
-L178-193: Please provide more information about the forecasts’ generation, with regards to the: simulation periods, forecast time steps, initialization dates, lead time (please also explain how you define lead time with a concrete example as different research groups define them differently, e.g., lead month 0 vs. 1), ensemble size for the SFF.
-L181-182: Operationally, with what lead times and at what time steps are the decisions made currently by K-water? This would help contextualize your methodological decisions.
-L185-189: Could you comment on the difference in ensemble sizes between the ESP and the SFF and the potential impacts on the performance evaluation?
-L186-187: Could you give a bit more information about the Tank hydrological model, such as its spatial resolution, how it was calibrated, how the initial conditions were obtained, and what its performance in simulation is for the basins considered here.
-L190: I would call the bias correction method a post-processing method rather than a downscaling method, to avoid confusions with downscaling methods used to refine the information granularity.
-L210: Please provide the range of CRPS values. Additionally, at zero, the performance of the SFF would be considered the same as that of the ESP, so there would be no skill associated. Please clarify (see the skill-score sketch after this list).
-L233-234: Are these objectives the ones used to generate the Pareto front? Please clarify.
-L233-239: How are the ensembles considered in equations 5 and 6? Is the ensemble median used?
-L244: Would it make more sense to calculate and present the forecast accuracy and skill for weekly aggregations rather than monthly, to match the aggregation periods of the SSD and SVD calculations?
-L251-252: It’s unclear to me how various Pareto fronts can be averaged. Are the individual solutions comparable across Pareto fronts or is this an assumption? Please clarify.
-L254: I thought there were one million solutions, as per L249-250?
-Figure 3: I would suggest writing out the acronyms (e.g., MCDM, WCD, etc.) in a table footer or in the caption so that the table could be understood as a standalone item.
-L337-338: Could you give us an indication of, for example, the spread of values and the mean per lead time? Here, interestingly the overall skill increases with increasing lead time. Could you infer some reasons for the skill increasing or decreasing with lead time for the various events and reservoirs in the results?
-Figure 5:
->It might be more coherent with section 3.1.2 to show the storage volume deficit instead of the storage volume.
->Could you label the dotted red-ish line at the top of each storage volume plot?
-L355: Was the wet event captured by the SFF? Knowing this could help explain some of the behaviours we can see in Fig. 5.
-L355-357: Please expand on how we can see that the deterministic scenarios offer slightly superior results for securing storage volume compared to the ensemble forecasts on the figure. E.g., is the reservoir replenished faster? However, if the SFF knew that there was a rainfall event coming up, couldn't we expect that it recommends filling up the reservoir later to avoid losses linked with an overestimation of the storage by the end of the water year? Then, it wouldn't be fair to say that the deterministic scenarios offer superior results to secure storage volume over the SFF if the reservoir is fuller faster. Please expand on this in your results.
-L371: Could the circles count be included somewhere in the text, figure or in a table?
-L380-388: “as the impact of forecast-informed operations accumulates” hints that the value of model-based “dynamic” forecasts has the potential to be even greater for longer drought events. This is a really interesting finding that I think would be nice to include in the discussion.
-L389: Could the sensitivity results also be impacted by the different sample sizes of the experimental choices? Bootstrapping could help characterize some of the results’ uncertainty.
-L396-397: I think that the forecast value here refers to gains both in terms of the SSD and the SVD, but please remind readers here. Please also remind us here what the benchmark is.
-Figure 8: Should the dates in the legend be September 30th instead of October 1st, to match the legend of Fig. 7?
-L423: Can you make any educated guess with regards to why there is a lot of variability in the MCDM method results with events and reservoir systems?
-L427-429: Why are we seeing those differences in the forecast value between the two regions? Does that somehow correlate with the skill of the seasonal meteorological forecasts in those regions or with how decisions were made historically? And what could explain the higher value of the SFF for the earlier event in the Soyanggang-Chungju reservoir system?
-L429-430: Why does increasing lead time lead to higher value?
-L434-435: Please clarify in the text (and in the caption) that the y axis shows the value tallied over the 8 MCDM methods.
-Figure 9: I think that the lines are a bit distracting in this figure. Could they be removed, with four different symbols used instead to represent the different events and reservoir systems?
-L444: Please explain what the “perfect forecast” is in the caption and/or early on in the text.
-L475: Except for the Soyanggang-Chungju earlier event.
-L497: I don't understand why the method that prioritizes storage (over supply?) is more suitable for high risks linked with supply deficit. Could you please elaborate a bit for readers not as familiar with reservoir management?
-L503-508: This is a repetition of the first discussion paragraph. Please consider combining both.
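Relating to the point on L210 above, here is a minimal sketch of a sample CRPS and a CRPS-based skill score with ESP as the benchmark, under which a value of one means a perfect forecast, zero means no improvement over ESP, and negative values mean worse than ESP. The function and variable names are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample CRPS for one forecast: 1-D array of ensemble members and a scalar observation."""
    members = np.asarray(members, dtype=float)
    accuracy_term = np.mean(np.abs(members - obs))
    spread_term = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return accuracy_term - spread_term

def crpss(crps_sff, crps_esp):
    """CRPS skill score with ESP as benchmark: 1 = perfect, 0 = no improvement, < 0 = worse."""
    return 1.0 - crps_sff / crps_esp
```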
Technical corrections:
-L181: “analyzing” instead of “comparing”?
-L249: Pareto.
-L335: “outperforms” instead of “outperforming”.
-L406: “forecast” instead of “forecasts”.
Citation: https://doi.org/10.5194/egusphere-2024-1985-RC1
- AC1: 'Reply on RC1', Yongshin Lee, 18 Nov 2024
- AC3: 'Reply on RC1 - Final version', Yongshin Lee, 19 Nov 2024
CC1: 'Comment on egusphere-2024-1985', James McPhee, 13 Oct 2024
Publisher’s note: this comment is a copy of RC2 and its content was therefore removed on 22 Oct 2024.
Citation: https://doi.org/10.5194/egusphere-2024-1985-CC1
RC2: 'Comment on egusphere-2024-1985', James McPhee, 21 Oct 2024
Review: egusphere-2024-1985
Summary: this manuscript attempts to show the value of seasonal flow forecasts for reservoir management by applying a simulation/optimization scheme and multicriteria decision making (MCDM) techniques to a reservoir system in South Korea. Seasonal Streamflow Forecasts (SSF) are used to run the model, and the performance is compared to that achieved by ESP and two deterministic scenarios consisting of the worst-case and 20-year observed droughts. The authors propose a metric to evaluate forecast value, consisting of the count of times a forecast-simulation scenario outperforms the historical operation of the system. They then evaluate the sensitivity of this forecast value to a series of methodological choices, including forecast lead time, MCDM technique, frequency of decision making, and type of streamflow forecast (deterministic or probabilistic) used.
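For readers skimming this discussion, a minimal sketch of the kind of count-based value metric summarized above is given below: the number (and share) of simulated solutions in which a forecast-informed operation outperforms the historical operation on both deficit objectives. Whether both objectives must improve, as assumed here, and all variable names are illustrative choices made for this sketch; the manuscript defines the actual metric.

```python
import numpy as np

def forecast_value(ssd_fcst, svd_fcst, ssd_hist, svd_hist):
    """ssd_*: supply deficits, svd_*: storage deficits, one entry per simulated solution;
    *_hist are the corresponding deficits under the historical operation."""
    ssd_fcst = np.asarray(ssd_fcst, dtype=float)
    svd_fcst = np.asarray(svd_fcst, dtype=float)
    # A solution "outperforms" history here if it reduces (or matches) both deficits
    better = (ssd_fcst <= ssd_hist) & (svd_fcst <= svd_hist)
    return int(better.sum()), float(better.mean())  # count and share of outperforming solutions
```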
General comments: The authors tackle an important topic, namely optimizing reservoir operations considering ever-increasing pressure on water resources and a less predictable hydrological regime due to climate change. The manuscript is well crafted, with attractive figures and an agile style which makes it enjoyable to read. However, I am unconvinced that the material presented supports many of the claims and generalizations the manuscript makes.

A fundamental concern is that of experimental design. The paper studies two reservoirs and three drought events, in what can hardly be considered a sample large enough to support claims about robustness. It appears that the events included are in all cases hydrological droughts with return periods of less than 20 years, because the 20-year drought always underestimates monthly flows. If this is actually the case, the experimental setup is predetermined to favor the ensemble over the deterministic methods, as the latter will underestimate flows and generate a poorer reservoir performance (evaluated ex post).

Concerning the historical operation of the system, from the example shown in Figure 5 it appears that reservoir operators did not realize they were facing a drought event until Jun-Jul 2015. After that date they accumulate large supply deficits, which can be more moderate when some forecast information allows hedging at the beginning of the simulation period (late 2014).

The discussion section mostly indicates that the results presented here confirm or align well with previous research by some of the coauthors or by other groups. It is not easy to determine what the main, novel contribution of this paper to the wider body of literature is, aside from the methodology for evaluating forecast value based on the count of outperforming scenarios. I would suggest that the authors take advantage of the fact that they have a limited sample of cases and analyze them in more detail: what are the hydrological characteristics of the drought events under study? What is the interplay between the temporality of monthly flows and demand? Are these reservoirs operated under multi-year criteria? Can operating rules (hedging) be introduced in combination with streamflow forecasts? In this way, they could glean further insights from their experiments. In the current version, many of the claims made in the discussion seem speculative.
Specific comments:
L91. How is the material presented here relevant to the wider community outside of South Korea? All insights that are scalable to other systems should be highlighted.
L112. This contribution does not seem substantial enough to justify the publication of this paper in its current form.
L132. It is said that both reservoirs operate as one, but later in the results section they are shown separately. Why?
L150: Here it is said that PET was computed, but the Penman-Monteith (FAO) method computes reference ET. This was confusing. Also, it is mentioned that evaporation from the reservoirs was neglected. This was surprising, because once you are running a simulation model, adding an empirical evaporation estimate shouldn’t be too challenging. What was the reason to neglect this evaporation term?
L304: replace “simply” with “simple”.
L315: It is difficult in the present manuscript to identify such insights.
L361: I don’t think it is possible to say that “generally” something is true in this context, because the number of scenarios analyzed is very limited.
L406. One problem with using the scenario count as forecast value is that it does not take into account the magnitude of the deficits generated in either of the scenarios tested (a possible magnitude-aware variant is sketched after this list).
L422-431. Here we see various results, but little analysis goes into gleaning the reasons why the MCDM method impacts the forecast value differently for each reservoir and drought event. Earlier it was said that both reservoirs are operated jointly, but the results presented here suggest otherwise. Please clarify.
Figure 9: please clarify how the “general expectation” curve is obtained.
L461-462: please provide examples of the release scheduling policies derived by the different methods, in order to substantiate the idea of cautious operation. Also, please clarify the idea behind the concept “adverse events not seen during the optimization”.
L490. The logic behind this sentence is not evident.
L496-502. This is mostly speculation, which I suggest avoiding.
L504-505. Why is this?
L527. I dispute the idea that the results presented here have demonstrated anything, because the number of experiments is limited, and because no actual explanation has been provided for the mechanics of the obtained results.
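Relating to the point on L406 above, a minimal sketch of one possible magnitude-aware alternative to the scenario count is given below, averaging the relative deficit reduction achieved by the forecast-informed operation. All names are illustrative assumptions and not part of the manuscript.

```python
import numpy as np

def magnitude_aware_value(deficit_fcst, deficit_hist, eps=1e-9):
    """Mean relative deficit reduction of the forecast-informed operation vs. the historical one.
    Positive values mean lower deficits than history; the size reflects by how much."""
    deficit_fcst = np.asarray(deficit_fcst, dtype=float)
    deficit_hist = np.asarray(deficit_hist, dtype=float)
    return float(np.mean((deficit_hist - deficit_fcst) / (deficit_hist + eps)))
```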
Citation: https://doi.org/10.5194/egusphere-2024-1985-RC2
- AC2: 'Reply on RC2', Yongshin Lee, 18 Nov 2024
- AC4: 'Reply on RC2 - Final version', Yongshin Lee, 19 Nov 2024
Viewed
HTML | PDF | XML | Total | Supplement | BibTeX | EndNote
---|---|---|---|---|---|---
304 | 132 | 61 | 497 | 48 | 15 | 14