Assessing the ability of the ECMWF seasonal prediction model to forecast extreme September-to-November rainfall events over Equatorial Africa
Abstract. This study investigates the predictability of rainfall over Equatorial Africa (EA) and evaluates the forecasting performance of the European Centre for Medium-Range Weather Forecasts fifth-generation seasonal forecasting system, version 5.1 (ECMWF-SEAS5.1), for the September–November (SON) period during 1981–2023 (43 years). The analysis considers two lead times, corresponding to initial conditions (ICs) in September and August. Regression, spatiotemporal and composite analyses are applied to highlight the relationship between extreme precipitation events over EA and the associated atmospheric circulation drivers. The analysis reveals that ECMWF-SEAS5.1 successfully reproduces the observed annual precipitation cycle and the seasonal spatial pattern of rainfall over the region for both ICs, with notably better skill for the September IC. In addition, the model effectively captures the teleconnections between EA rainfall and tropical sea surface temperature, including the Indian Ocean Dipole and the El Niño-Southern Oscillation, for both ICs. Regions with the highest potential predictability coincide with those where the model accurately represents strong (weak) composite rainfall anomalies, associated with strong (weak) moisture flux convergence (divergence), although the magnitudes tend to be underestimated. Other important observed features, such as the components of the African easterly jet, are well represented by the model for the September IC, but not for the August IC. While many of the atmospheric mechanisms driving precipitation in the region are well simulated, their underestimation likely explains the model's general tendency to underestimate the magnitude of extreme rainfall events. The results of this study support efforts by national weather services across the region to improve forecast outputs by integrating ECMWF model outputs into operational weather bulletins.
Status: final response (author comments only)
- RC1: 'Comment on egusphere-2025-2656', Indrani Roy, 16 Sep 2025
- AC1: 'Reply on RC1', Hermann Nana, 24 Nov 2025
  The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2025/egusphere-2025-2656/egusphere-2025-2656-AC1-supplement.pdf
- RC2: 'Comment on egusphere-2025-2656', Anonymous Referee #2, 27 Nov 2025
Review of the manuscript entitled “Assessing the ability of the ECMWF seasonal prediction model to forecast extreme September-to-November rainfall events over Equatorial Africa”, submitted by Nana et al. to Natural Hazards and Earth System Sciences (NHESS).
In this study, the performance of the latest version of the SEAS5 seasonal prediction model (version 5.1; SEAS5.1) in simulating rainfall interannual variability, extreme events and their main drivers over the Equatorial Africa (EA) region in the September to November season (SON) is evaluated. By means of correlation, regression and composite analysis, the results suggest that SEAS5.1 shows significant skill, mainly over the eastern sector of the study area, as well as over the western coast, and this skill is mainly linked to teleconnections with the IOD and ENSO. Furthermore, the model appears to reproduce the underlying physical mechanisms explaining the rainfall variability in EA reasonably well. Therefore the authors suggest that climate services in EA can benefit from including SEAS5.1 when evaluating predictability.
In general terms, the findings of this study are of interest to the scientific community and useful for regional climate services in the EA region. The topic also fits within the aims and scope of NHESS. However, I believe that there are a number of relevant issues that need to be addressed before I could recommend publication. Please find below my general points, followed by more specific points and other minor comments.
General comments:
- The motivation of the study is not made sufficiently clear. Thus, the authors need to highlight the novelty of the study in comparison with recent ones, in particular with the study by Tefera et al. (2025) which already characterises the relationship between rainfall in OND and ENSO-IOD in observations and in C3S seasonal models and also examines some extreme years. Therefore, the authors need to clarify what gap the present study intends to fill. Is it that the previous studies did not specifically use version 5.1 of SEAS5? Is it that large-scale drivers were not addressed before for the model?
- The current structure of the Introduction makes it somewhat difficult to follow, and a more streamlined presentation would improve readability. I suggest the following:
- Start with a very brief description of the seasonal cycle of rainfall in EA and what drives this seasonal variability. This provides useful context for readers who may be unfamiliar with the region and justifies the focus on the SON season.
- Keep the description of mechanisms limited to SON, which is the season examined in this study. Although the current introduction summarises a wide range of relevant literature, reviewing mechanisms across all seasons may distract from the primary objective.
- In line with my previous point 1, emphasise the gaps that remain in the existing literature and explain how the objectives of this study will help address them. It is fundamental to make very clear why this study is necessary, and this is not clear in the current version of the manuscript.
- This study uses seasonal forecast data at lead month 0 (lead-0). I am unsure that this is standard practice in the analysis of large scale drivers or teleconnections. At lead-0, forecast skill will exhibit a substantial influence from atmospheric initial conditions and short-range predictability, while for lead-1, lead-2… the role of the ocean as a predictor becomes more important. For instance, Fig. 2 shows that lead-0 forecasts exhibit higher ACC than lead-1. However, it is unclear whether this increase in skill is due to better representation of teleconnections and large-scale drivers at lead 0, or whether it primarily reflects the influence of atmospheric initial conditions. Hence some results should be interpreted with caution. Moreover, SEAS5 forecasts initialised in September are not available at Copernicus CDS until 6 September (10 September for the rest of the seasonal models available at CDS), i.e. when part of the month has already passed, thereby limiting practical applications for climate services. For these reasons, I think that lead-0 seasonal forecasts are probably not the most suitable choice for the purposes of this study, unless the authors provide convincing arguments for their use.
- Section 4, on the large-scale drivers, currently reads as if it were a standalone study. Integrating it more closely with the preceding discussion would improve the cohesiveness of the manuscript. How do the differences between model and observations in the physical mechanisms relate to the forecast skill and the ability to reproduce observed precipitation patterns in SY and WY years?
- Regarding the statistical significance of the results: the authors should be aware of the correction of p-values due to multiple testing. Each time a hypothesis test is carried out, there is a small (albeit non-negligible) probability of erroneously rejecting the null hypothesis. If just one test is carried out, this is not an issue. However, an enormous number of tests are carried out when evaluating significance over a latitude-longitude grid, and consequently a number of erroneous rejections will arise and statistical significance is often overstated. Please see Wilks (2016) for a description of the problem and how to take test multiplicity into account; a minimal sketch of a false-discovery-rate correction is given after the Wilks reference below. The authors should either:
- Take into account the multiple testing problem and correct the p-values in order to limit the false discovery rate.
- Keep the evaluation of statistical significance as-is in the current manuscript, but acknowledge in the Methods section that the correction of p-values due to multiple testing was not addressed. Figure captions should be adjusted as well, e.g. “the stippling occurs where X is locally significant at the 95% confidence level through a Student’s t test” (i.e. emphasise that significance was evaluated just locally). The discussion should accordingly reduce the emphasis placed on significant results.
Wilks (2016): https://doi.org/10.1175/BAMS-D-15-00267.1
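For illustration only, a minimal sketch of the Benjamini-Hochberg procedure discussed by Wilks (2016), assuming the local p-values are already available as a 2-D NumPy array; the array name `pvals` and the control level `alpha_fdr` are illustrative, not taken from the manuscript:

```python
import numpy as np

def fdr_mask(p_values, alpha_fdr=0.10):
    """Boolean mask of grid points that remain significant after
    controlling the false discovery rate (Benjamini-Hochberg)."""
    p = p_values.ravel()
    valid = np.isfinite(p)                      # skip masked/NaN grid points
    n = valid.sum()
    p_sorted = np.sort(p[valid])
    # largest sorted p-value satisfying p_(i) <= (i / n) * alpha_fdr
    thresholds = np.arange(1, n + 1) / n * alpha_fdr
    below = p_sorted <= thresholds
    p_crit = p_sorted[below].max() if below.any() else 0.0
    mask = np.zeros(p.shape, dtype=bool)
    mask[valid] = p[valid] <= p_crit
    return mask.reshape(p_values.shape)

# e.g. stipple only the (lat, lon) points that survive the FDR control:
# significant = fdr_mask(pvals, alpha_fdr=0.10)
```

Stippling only the points in such a mask would limit the false discovery rate instead of relying on purely local tests.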
Specific comments
- Regarding Figure 1:
- It is confusing to use different plot types for model lead-0 and lead-1 in Fig. 1a. Please use the same plot type for model data in order to allow a clearer model-observations comparison.
- In Fig. 1a, the difference with model lead-1 is striking. In fact, I retrieved the SEAS5 data from CDS and tried to reproduce the same plot and found no lag between model lead-1 and model lead-0. There might have been an issue when selecting the lead time or plotting lead-1 data. Please check this.
- The authors indicate in the Methodology section that SEAS5 data from the August and September initialisations are used. However, in Fig. 1, model data throughout the year are represented. I suppose that more initialisations apart from August and September were used, but this is not specified in the manuscript. Finally, in Fig. 1c and 1d, how is the total annual precipitation computed? Is it from model or observations? Please clarify in the text.
- In the discussion of Fig. 6, the authors state “The SEAS5.1 captures these relationships reasonably well at both L0 and L1, but overestimated the correlations, …”. I do not agree with this statement since it cannot be derived from the data shown in Fig. 6. When the ensemble mean is computed, part of the high-frequency internal variability is filtered out and the part of the signal that remains is mainly associated with the lower frequency forcing and boundary conditions, in this case mainly from oceanic sources of predictability (IOD and ENSO). Conversely, this filtering is not present in the observations, and thus care should be taken when discussing the differences between model and observations. Hence, the fact that correlations with SST indices are higher in SEAS5 compared to observations may well be an artefact arising from using ensemble mean data. In order to assess whether there is a true overestimation of the correlation in SEAS5 related to some model deficiency or bias, the authors could compute correlations for the individual ensemble members. Comparing the observed correlation value with the distribution of correlation values from the individual ensemble members provides a robust framework to assess whether there is a systematic overestimation in the model (a minimal sketch of this check is given after this list of specific comments).
- In the Methods section, it is explained that ERA5 data are used for the evaluation of the physical mechanisms. However, ERA5 precipitation data are represented in Fig. 8 and Fig. 9 and there is no mention of or discussion about ERA5 precipitation in these figures. Could you indicate what is the purpose of using ERA5 precipitation? If it is for validation with the CHIRPS database, there should be at least some sentence about it in the discussion.
- If this study differs from the previous literature mainly in its use of version 5.1 of SEAS5, I think it would be helpful to briefly explain the main differences between version 5.1 and the previous version when the model is presented in the Data and Methods section.
- The analysis of composites of extreme events begins rather abruptly, moving immediately into the discussion of Figs. 7 and 8. Instead, it would be convenient to add a paragraph that serves as a link between the preceding discussion and the subsequent analysis of extreme events. This paragraph would be also useful to emphasise the motivation for the study of extreme rainfall events, which is not stated in the current manuscript. What are the main objectives of the analysis of extreme events?
- Figs. 6 and 8 seem to suggest an asymmetry in the teleconnections to EA rainfall. For instance, in Fig. 6 it appears that if only SST < 0 values are considered, the correlations with ENSO are not significant, while for SST > 0 the positive correlations become more apparent. Fig. 8 appears to confirm this, in the sense that the weak rainfall years composite is not the exact opposite pattern to the strong one. In fact, SST anomalies over the ENSO region and Indian Ocean are weak and generally not statistically significant in the weak rainfall years composite. However, it is not until the discussion of Fig. 11 that these differences in the magnitude of the anomalies are mentioned. I think the apparent asymmetry should be discussed earlier.
- Lines 545-547: I do not agree that the patterns are “strong opposite” looking at Fig. 10. This is in fact the lack of symmetry in the teleconnection I was mentioning in my previous comment.
- The results from the study by Tefera et al. (2025) can also be cited in lines 412 and 436.
- Line 118. I could not find in the references the study by Tanessong et al. (2025). Do you mean Tanessong et al. (2024)? The SEAS5 model is not used there.
- I suggest that Fig. 9 is moved to Supplementary material, as its discussion is very short and it serves as a confirmation of the previous findings.
- Line 204: Could you indicate what interpolation technique was used?
- Although it is stated later on in the manuscript, please indicate in the Methods section that the N34 and DMI indices used are standardised indices.
- Line 593: I think that you mean the western part of EA, not eastern.
- Line 895: This reference follows a different format compared to the rest. Please ensure consistency.
- The subpanels of Fig. 6 (a, b and c) are not labeled.
- Colourbar units are missing in Figs. 4 and 5.
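Regarding the ensemble-member check suggested for Fig. 6, a minimal sketch of what is meant, assuming hypothetical arrays `members` (n_members x n_years) of area-averaged SON rainfall from SEAS5.1, `obs` (n_years) from CHIRPS and `n34` (n_years) for the standardised Niño 3.4 index; all names are illustrative:

```python
import numpy as np

def member_correlations(members, index):
    """Correlation of each individual ensemble member with an SST index."""
    return np.array([np.corrcoef(m, index)[0, 1] for m in members])

# Hypothetical inputs: members (n_members, n_years), obs (n_years,), n34 (n_years,)
# r_members = member_correlations(members, n34)              # member-wise correlations
# r_obs     = np.corrcoef(obs, n34)[0, 1]                    # observed correlation
# r_ensmean = np.corrcoef(members.mean(axis=0), n34)[0, 1]   # ensemble-mean correlation
# r_ensmean will typically exceed most r_members because averaging filters out noise;
# a systematic overestimation is only suggested if r_obs falls in the lower tail of
# the member-wise distribution, e.g.:
# print(r_obs < np.percentile(r_members, 5))
```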
Minor language, formatting and/or consistency corrections
- Line 104: Replace “equatorial Africa” with EA (abbreviation already defined). Check throughout the manuscript.
- The current manuscript mixes British spelling (e.g. “organised” in Line 164, “standardised” in Lines 468 and 479) and American spelling (e.g. “normalizing” in Line 270, “characterized” in Line 629). Please check throughout the manuscript and ensure consistency.
- Line 328: “...strength of SEAS5.1 to simulated SON rainfall…” does not sound correct. Please clarify this sentence.
- Lines 371-372: “... internal variance is dominated by the external variance”. This statement may be misleading, as if the external variance was part of the internal variance. I suggest replacing it with something like “... external variance outweighs the internal variance…”.
- Line 455: “... ENSO has an indirect effect through IOD conditions, …” (add “effect”).
- Line 469: Typo: “periode” is “period”. Please check throughout the manuscript as this typo appeared several times.
- Lines 472-473: The sentence starting with “The criteria…” is grammatically incorrect. Please revise it.
- Line 484: please correct “years capture” to “years captured” (add the d).
- Line 489: replace “weak column” with “second column”.
- Line 546: a comma is missing between “Atlantic” and “Indian”.
- Line 584: I suggest replacing “positive and negative” with “strong and weak”.
- Line 590: replace “underestimate” with “underestimation”
- Line 620: this phrase reads better if re-structured “... over which the AEJ components (black dashed contours) at 15° E, and specific humidity (red contours) calculated between 10°E and 30°, are overlaid”.
Citation: https://doi.org/10.5194/egusphere-2025-2656-RC2
Review of paper titled ‘Assessing the ability of the ECMWF seasonal prediction model to forecast extreme September-to-November rainfall events over Equatorial Africa’ by Nana et al.
Review by Indrani Roy
This paper focuses on rainfall predictability over Equatorial Africa (EA) for September to November (SON) by exploring ECMWF-SEAS5.1 data during 1981-2023. Using regression, spatiotemporal and composite analyses, the authors study extreme precipitation events and the associated atmospheric circulations. Two lead times are used for initial conditions (ICs), namely September and August, and better skill is noted for the September IC in terms of the annual precipitation cycle and seasonal spatial pattern. Teleconnections between rainfall and ENSO and the IOD are captured well for both ICs. Certain areas of underestimation are also identified. The results have implications for improved operational forecasting, and I recommend a revision.
Main points:
1. In Table 1, there are only two WY years for L1. Mention that these significant results are obtained using only two years. Similarly, for SY there are only four years for L1. Discuss briefly whether the small number of years has any influence on the results shown in Fig. 8 (e-h).
Also, in Fig. 7 some years could be identified as SY in the model (2015 for both L0 and L1, 2002 for L1) or WY (1984 for L1, 1996 for both L0 and L1, 2021 for L0 and 2022 for L1) but were not captured in the observations. Were those years included in Fig. 8 (e-h)? Discuss those. How does the inclusion or exclusion of those years affect the results and the regions with significant signals?
In Table 1, did you check whether ERA5 shows the same SY and WY years as CHIRPS? If ERA5 is included in Fig. 7, some borderline years (e.g. 1994) or other years could differ. Hence, caution should be taken when sampling the SY and WY years, since ERA5 data are used in all analyses of the mechanisms.
2. As Fig. 9 shows differences between CHIRPS and ERA5, it would be better to include ERA5 in Fig. 7 as well as in Table 1. You included composites of SY and WY in Fig. 10 for ERA5 too, but those years were chosen using CHIRPS. However, the SY and WY years of CHIRPS and ERA5 may differ depending on your threshold-based selection criteria. As the sample of observed years is very small, adding or removing one or two years can make a difference.
To overcome such issues, you might consider only the years for which both CHIRPS and ERA5 identify the same SY and WY years; the 1 SD threshold could also be adjusted (a minimal sketch of such a selection is given after these main points). All the compositing results that you presented could still be similar; however, the results and discussion would be much more robust.
3. Caution should be taken when linking any mechanisms to the Atlantic sector, as these are not very clear in the current analyses.
Lines 532-533: No significant influence from the Atlantic Ocean is seen for SY years in the observations/reanalyses or the model. For WY years, some influence is present, but the model overestimates the observations/reanalyses; for ERA5 it is nominal, and for CHIRPS it does not come from the 'eastern equatorial Atlantic Ocean'. Mention these points. In Fig. 10, for WY years, the SST signals in the box regions are practically missing in the observations/reanalyses and at L0; discuss that part. It indicates an asymmetric influence in WY years compared with SY years.
Line 569: The SST signal in the equatorial Atlantic is not significant. Also, there is no signal there in Fig. 11 (a, c, e, f).
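As an illustration of the suggested selection of common SY/WY years, a minimal sketch assuming the area-averaged SON rainfall series of both datasets are available as 1-D arrays; the names `chirps_son`, `era5_son` and the 1 SD threshold are illustrative:

```python
import numpy as np

def strong_weak_years(rain, years, threshold=1.0):
    """Classify strong (SY) and weak (WY) rainfall years from a
    standardised SON area-averaged rainfall series."""
    z = (rain - rain.mean()) / rain.std()
    return set(years[z >= threshold]), set(years[z <= -threshold])

# Hypothetical inputs: chirps_son, era5_son (43 SON values each), years = np.arange(1981, 2024)
# sy_chirps, wy_chirps = strong_weak_years(chirps_son, years)
# sy_era5,   wy_era5   = strong_weak_years(era5_son, years)
# Keep only the years on which both datasets agree before compositing:
# sy_common = sorted(sy_chirps & sy_era5)
# wy_common = sorted(wy_chirps & wy_era5)
```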
Minor points: