the Creative Commons Attribution 4.0 License.
Multi-decadal initialized climate predictions using the EC-Earth3 global climate model
Abstract. Initialized climate predictions are routinely carried out at many institutions worldwide to predict the climate up to ten years ahead. In this study we present 30-year-long initialized climate predictions and hindcasts consisting of 10 ensemble members. We assess the skill of the predictions of surface air temperature on decadal and multi-decadal timescales. For the 10-year-average hindcasts, we find limited added value from initialization beyond the first decade over a few regions, and no added value from initialization for the third decade (i.e. forecast years 21–29). The ensemble spread in the initialized predictions grows with forecast time; however, the initialized predictions do not necessarily converge towards the uninitialized climate projections, even years and decades after initialization. In particular, there is a long-term weakening of the Atlantic Meridional Overturning Circulation (AMOC) after initialization that does not recover within the 30 years of the simulations, remaining substantially weaker than the AMOC in the uninitialized historical simulations. The weaker AMOC mean state also results in different surface temperature anomalies over northern and southern high-latitude regions, with cooler temperatures in the northern hemisphere and warmer temperatures in the southern hemisphere in the later forecast years compared to the first forecast year. These temperature differences are due to reduced heat transport to the northern hemisphere in the later forecast years. These multi-decadal predictions therefore highlight important issues with current prediction systems, which drift over the long term into climate states inconsistent with the climate simulated by the historical simulations.
Status: open (until 23 May 2025)
RC1: 'Comment on egusphere-2025-1208', Anonymous Referee #1, 21 Apr 2025
Summary
The study by Mahmood et al. makes an important step forward in understanding initialized prediction on decadal timescales (out to 30 years), a topic that is only rarely addressed due to computational costs. The results are very interesting, and ultimately seem to hinge on the fact that the initialized runs push the AMOC into a weakened state that is outside the range captured by the uninitialized simulations. The authors could enhance the impact of this study by focusing in a bit more on this topic: is this a plausible reality? Why do these differences exist in the Labrador Sea convection? And if initialized predictions are perhaps unreliable at this (multi-decadal) timescale, should the community instead be pushing towards what the authors refer to as "variability-constrained projections" (which would benefit from even a brief description)?
Major Comments:
- The discussion in the first paragraph of Section 3.1 should be much more nuanced. The authors state that most land areas show ACC that is positive and statistically significant, with the exception of South America; but looking at Fig. 1, there is a lot of stippling indicating non-significance, even over land. Signals over Australia, for example, are rarely significant. Comparing with Fig. S1, it also seems that significance drops dramatically in the case with fewer years (most notably over the Indian Ocean and Pacific). This calls into question the assertion that "a reduced sample size (due to initializing by every fifth year) does not strongly affect the skill of the prediction system."
- Figure 1: Given how much of the signal is not significant and thus stippled, I wonder if the figure would be more readable if stippling instead indicated where results are significant? It is hard to discern at first glance right now.
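On the stippling question: switching the mask so that stippling marks significant (rather than non-significant) points only requires the ACC field and a per-gridpoint significance mask. A minimal NumPy sketch of how such a mask can be built (the function name `acc_with_significance` is hypothetical, and a simple Fisher z-test stands in for whatever significance test the authors actually used):

```python
import numpy as np

def acc_with_significance(pred, obs, z_crit=1.96):
    """Anomaly correlation coefficient (ACC) at each grid point, with a
    Fisher z-test for significance (|z| > z_crit, ~5% two-sided).
    Illustrative only: the paper's test may differ, e.g. by accounting
    for serial correlation. pred, obs: arrays of shape (time, lat, lon)."""
    pa = pred - pred.mean(axis=0)          # anomalies along the time axis
    oa = obs - obs.mean(axis=0)
    n = pa.shape[0]
    r = (pa * oa).sum(axis=0) / np.sqrt(
        (pa ** 2).sum(axis=0) * (oa ** 2).sum(axis=0)
    )
    z = np.arctanh(r) * np.sqrt(n - 3)     # Fisher transform of r
    return r, np.abs(z) > z_crit           # mask is True where significant

# Toy example: 40 "years" on a 3x3 grid, with a forecast correlated to obs
rng = np.random.default_rng(0)
obs = rng.standard_normal((40, 3, 3))
pred = obs + 0.5 * rng.standard_normal((40, 3, 3))
r, sig = acc_with_significance(pred, obs)
```

The returned boolean mask can then be passed to the plotting routine to stipple only where `sig` is True, inverting the current convention.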
- Discussion of Fig 2 notes the different mean state in initialized predictions, but should also discuss the much larger bias as a result.
- The discussion around the Labrador Sea is interesting, as it points to inherent differences between initialized and uninitialized models. Some added context would strengthen the section further; for instance, is a shutdown in the Lab Sea convection rooted in any observations? Is it realistic, or an unwanted model behavior? Why does this happen in initialized runs?
- Line 305: “The climate predictions based on these variability-constrained projections (e.g. Mahmood et al., 2022; Donat et al., 2024) even show higher and more widespread added skill, compared to the initialized predictions presented in this study, for both 10 and 20 year mean predictions.” – this seems like a huge point, but isn’t backed up by comparison within this text. It would make a stronger paper overall to focus in on this as a piece of the larger manuscript.
Minor Comments:
- Line 51: What is meant by “constraining patterns of climate variability in large ensembles of the uninitialized projection simulations”? Some added description would help.
- Line 79: “…initialized on the first of November every 5th year…” – is there any rationale for using November? Is this consistent with other experiments? Would differences be expected if initializing in spring/summer instead of fall/winter? (Perhaps the Lab Sea response would differ in particular?)
- Line 90: "…on the order of 10^-5 K)" – to clarify, is this meant to be 1e-5? That seems like it might actually be rather large, but I'm typically working in uninitialized space, wherein 1e-14 perturbations are common. Could the authors comment on whether this is a standard magnitude for initialized predictions?
- Line 106: “…to a uniform five-degree grid” – that’s really coarse; is there a reason to convert to five degrees? Is this the native resolution of some runs vs others?
- Line 141: “…evaluation periods are different (i.e. 1961-2020 for FY1-10, 1971-2020 for FY11-20 and 1971-2020 for FY21-29).” – Could you limit the evaluation period for FY1-10 to 1971 onwards arbitrarily, just to test the impact?
- Line 149: “…over land regions including northwest Canada and USA.” – I’m not sure I see a lot of added value over the USA, it’s mostly stippled? But Australia and the Middle East seem to show something more cohesive.
- Line 162: “Previous studies have shown that the decadal predictions can be highly skillful in forecasting temperature over SPNA region.” – Citations?
- Fig 2: Please add a legend for the lines, in addition to the description in the caption, for easier readability.
- Fig 5: Suggest adding stippling for significance, as in other maps
Citation: https://doi.org/10.5194/egusphere-2025-1208-RC1