This work is distributed under the Creative Commons Attribution 4.0 License.
Decadal predictions of wind, solar and compound power indicators to support the European renewable energy sector
Abstract. Renewable energy production is strongly influenced by climate variability and change, making the energy sector sensitive to fluctuations on decadal timescales. Decadal climate predictions, which aim to forecast climate variability over the next few years, therefore offer potential value for anticipating near-term changes in wind and solar resources and supporting climate-informed energy planning. However, the predictive skill of decadal forecasts for energy-relevant indicators remains poorly quantified, and quantifying it is crucial for assessing the potential usability of any forecast product.
This study evaluates the skill of decadal climate predictions over Europe for forecast years 1–3 using a multi-model ensemble from the Coupled Model Intercomparison Project Phase 6 (CMIP6) Decadal Climate Prediction Project (DCPP). We assess three energy-relevant indicators: photovoltaic potential (PVpot), wind capacity factor (WCF), and a compound indicator describing the number of energy drought days (NED), defined as days with inefficient production from both wind and solar resources. The skill is evaluated against the ERA5 reanalysis, and the added value of model initialization is estimated by comparing the decadal predictions against the non-initialized historical forcing simulations. PVpot exhibits the highest and most spatially homogeneous skill for annual, spring and summer aggregations, closely reflecting the high predictability of surface solar radiation. WCF shows low and spatially heterogeneous skill, consistent with the high intrinsic variability of wind. The compound NED indicator displays a strong seasonal dependence: its predictability is largely controlled by solar conditions in high-radiation seasons and by wind in winter and autumn. Model initialization generally provides added value where the historical simulations already show some skill, especially for PVpot, while its impact is lower for WCF. This work identifies the specific seasons, regions and energy indicators for which decadal predictions can provide actionable climate information to support renewable energy applications.
Status: open (until 23 May 2026)
RC1: 'Comment on egusphere-2026-1205', Anonymous Referee #1, 17 Apr 2026
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2026/egusphere-2026-1205/egusphere-2026-1205-RC1-supplement.pdf
Citation: https://doi.org/10.5194/egusphere-2026-1205-RC1
RC2: 'Comment on egusphere-2026-1205', Anonymous Referee #2, 27 Apr 2026
This paper analyses the skill over Europe of energy-relevant variables from decadal prediction systems, for forecast years 1–3. It shows in which regions and seasons the variables are skilful, and compares the skill to uninitialised climate simulations to show where initialisation can enhance or reduce the skill.
Although other studies of predictive skill of energy relevant variables on decadal timescales are beginning to emerge, to my knowledge the results presented for this particular forecast timescale (<5 years) are novel, as well as those for the compound indicator. However, I have a concern that the difference in ensemble size between DCPP and HIST simulations may be contributing to the enhanced DCPP skill in regions where HIST already has skill, rather than it being solely due to initialisation. If the authors can address this, as well as a few other issues outlined below, I think the study will be worthy of publication.
General comments:
1. As mentioned above, my main concern is that the interpretation that differences in skill between the DCPP and HIST simulations are due to initialisation could be hampered by the different ensemble sizes: the DCPP ensemble is more than twice as large as the HIST one, and should therefore have a better representation of the forced signal. One way to test this could be to randomly sample DCPP sub-ensembles that are the same size as the HIST one and see the effect on the results.
2. The results section is hard to follow. The descriptions of the figures are long and detailed, and in some cases may be over-interpreting noise. For example L236: “In SON, ResCorr values are mostly positive but small, suggesting that initialization provides a limited additional contribution compared to the forced trend” – in this case the differences are statistically insignificant, so I don’t think you can infer anything here. Overall I suggest shortening the descriptions to bring out the main messages.
3. The reason for focussing on the 3 year timescale (as opposed to other <10 year timescales) is not explained. Is that timescale of particular interest to the energy community?
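To make the sub-sampling test suggested in general comment 1 concrete, it could be sketched as follows. This is a minimal illustration with synthetic data: the ensemble sizes, number of verification years, and the use of the ACC as the skill measure are assumptions for the sketch, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble sizes (illustrative only): a DCPP ensemble with
# 40 members, a HIST ensemble with 16, over 30 verification years.
n_dcpp, n_hist, n_years = 40, 16, 30
dcpp = rng.standard_normal((n_dcpp, n_years))
obs = rng.standard_normal(n_years)


def acc(ens, ref):
    """Correlation between the ensemble-mean anomaly and the reference."""
    em = ens.mean(axis=0)
    fa = em - em.mean()
    oa = ref - ref.mean()
    return float(np.dot(fa, oa) / (np.linalg.norm(fa) * np.linalg.norm(oa)))


# Repeatedly draw HIST-sized sub-ensembles from DCPP (without replacement)
# and recompute the skill; the spread of sub_scores relative to full_score
# indicates how much of the DCPP advantage could come from ensemble size
# alone rather than from initialisation.
sub_scores = [
    acc(dcpp[rng.choice(n_dcpp, size=n_hist, replace=False)], obs)
    for _ in range(1000)
]
full_score = acc(dcpp, obs)
```

If the HIST skill falls within the spread of `sub_scores`, ensemble size alone cannot be ruled out as the explanation for the skill difference.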
Minor comments:
1. L65: The paper quotes “five different decadal forecasting systems”, but it looks like only three from table S1. What’s the difference between EC-Earth3 i1, i2 and i4?
2. L83: I think it would make more sense to regrid to the lowest resolution, as models will not be able to predict features below this spatial scale.
3. L90: “Figure S1 shows the climatological ratio between ERA5 and GWA mean 6-hWIND over 1961–2019, which is applied as a pointwise correction to the ERA5 data”. I didn’t understand how you’ve applied the correction – do you multiply each 6h WIND at each grid point by this ratio? Does the variability look correct after this?
4. L94-102: It is unclear to me how the EQM is performed. For the historical simulations, if you take per calendar year per member, do you take, for example, simulated year 1990 from member 1 and map the quantiles to 1990 in the reference dataset (corrected ERA5)? If so, as 1990 was a very windy year in some regions, will that inflate the skill? You mention cross-validation, but if you remove the year in question, which years do you use as the reference? Also, is EQM performed at each grid point?
5. L206-215: comparison of ResCorr to HIST for PVpot (and where it appears in other sections) – it’s hard to compare across figures in the main paper and supplementary material. I suggest either including HIST ACC plots in the main paper as additional panels, or you could show contours where HIST has significant skill (e.g. on Fig 1f-j).
6. Fig 2 – It would help to have titles on each panel. This also applies to Fig 4 and 6 where the panel titles aren’t given in the caption.
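For minor comment 4, one way the per-grid-point EQM with leave-one-out cross-validation could work is sketched below with synthetic data. The function name, the quantile count, and the toy bias are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def eqm_leave_one_out(model, ref, target_year, n_quantiles=20):
    """Empirical quantile mapping at a single grid point, with the target
    year withheld: the quantile transfer function is built from all other
    years and then applied to the withheld model value."""
    train = np.delete(np.arange(len(model)), target_year)
    q = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model[train], q)
    ref_q = np.quantile(ref[train], q)
    # Piecewise-linear mapping of the withheld value through the paired quantiles.
    return float(np.interp(model[target_year], model_q, ref_q))


# Toy check: a model series that is the reference plus a constant bias of 5
# should map back onto the reference value for the withheld year.
ref = np.arange(30.0)
model = ref + 5.0
corrected = eqm_leave_one_out(model, ref, target_year=10)  # corrected ≈ 10.0
```

Under this construction the question in the comment becomes concrete: the transfer function for, say, 1990 is trained on all other years, so an anomalously windy 1990 in the reference does not enter its own correction.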
Technical/typos:
1. There seems to be an issue with some of the referencing. For example L21 “Agency, 2022”, L30 “Commission, 2020”; L34 “Association et al. 2023”.
2. L168 “the ACC between the DCPP of the indicator and ERA5” – do you mean “the ACC of the indicator between DCPP and ERA5”? Similar phrases appear elsewhere, e.g. L174.
3. In the interannual variability plots (Figs S6, S9, S11) – is it interannual variability, or variability of 3-year means?
4. Fig S5: labelling – the top row is labelled f-j, the middle row is a-e – unclear which is TAS and RSDS.
5. Fig 2b – Scandinavia DJF has correlation of 0.00 but it is labelled as statistically significant.
6. L225 “For Scandinavia and Great Britain and Ireland, trends generally disagree and correlations are low, with a few exceptions (Scandinavia in the annual mean and SON and Great Britain and Ireland in MAM), although significant correlations are limited to the the annual mean in Scandinavia and MAM in Great Britain and Ireland.” I found this confusing – isn’t the second part repeating what’s in the brackets?
7. Label for Fig S14 – says same as Fig S5 – do you mean S7?
8. L438 “multi-model ensemble”
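Regarding item 2 above, the quantity presumably intended — the ACC of the indicator between the DCPP ensemble mean and ERA5 — could be computed per grid point as follows. This is a minimal sketch; the array shapes and names are assumptions.

```python
import numpy as np


def acc_of_indicator(dcpp_ens, era5):
    """Anomaly correlation coefficient between the DCPP ensemble mean of an
    indicator and the ERA5 reference over the verification years, with
    anomalies taken relative to each dataset's own time mean."""
    fcst = dcpp_ens.mean(axis=0)  # ensemble mean: one value per year
    fa = fcst - fcst.mean()
    oa = era5 - era5.mean()
    return float(np.sum(fa * oa) / np.sqrt(np.sum(fa**2) * np.sum(oa**2)))


# Toy check: an ensemble whose members all equal the reference gives ACC = 1.
era5 = np.arange(10.0)
dcpp_ens = np.tile(era5, (3, 1))
score = acc_of_indicator(dcpp_ens, era5)
```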
Citation: https://doi.org/10.5194/egusphere-2026-1205-RC2