the Creative Commons Attribution 4.0 License.
Brief communication: Anthropogenic aerosol forcing of European windstorms in CMIP6 climate models
Abstract. A recently developed set of historical storm reconstructions, which were extensively validated by insurance loss data, revealed how European windstorm damages were three times higher in the 1980s and '90s compared to a few decades before and since. A better understanding of these slower fluctuations could improve how this costly risk is managed. Here, we explore the impacts of anthropogenic aerosols (AA) on European property damage using results from DAMIP (Detection and Attribution Model Intercomparison Project) climate model experiments. Multimodel mean DAMIP results indicate AA boosted European wind losses by 45 % in the late 20th century relative to preindustrial times, with the signal varying from zero to 100 % between the six models. A review of results from previous climate studies suggested the signal is more likely to be at the higher end of this range, though significant uncertainties remain. The results indicate AA forcing could have been a major driver of recent multidecadal changes in European windstorm losses. Further research into observational and modelling uncertainties would benefit those exposed to this risk.
Status: open (until 04 Apr 2026)
- RC1: 'Comment on egusphere-2025-6232', Anonymous Referee #1, 26 Feb 2026
- RC2: 'Comment on egusphere-2025-6232', Anonymous Referee #2, 24 Mar 2026
Summary
This study investigates whether variations in anthropogenic aerosol (AA) forcing may explain the observed multidecadal variations in property losses caused by European winter storms during the past decades. It is motivated by the result of a previously published study by the author, showing that European windstorm losses exhibited a notable positive anomaly during the 1980s and 1990s. The present study uses six CMIP6 climate models (with several ensemble members per model, 62 simulations in total) to isolate the effect of AA forcing on European windstorm losses, by comparing control simulations (with all external forcings set to preindustrial levels) to simulations with historical (1850-2014) AA forcing. A simple but well-established diagnostic for windstorm losses is applied to the wind data generated by the models, and differences between multidecadal averages of the losses (‘AA forced’ minus ‘control’) are investigated. The manuscript concludes that AA forcing could have been a major driver of the observed positive anomaly of European windstorm losses during the 1980s and 1990s. To my understanding, this is the main conclusion of the manuscript.
Although the motivation and overall approach of this study are meaningful and interesting, the manuscript does not, in my view, present sufficient evidence for the main conclusion. Moreover, the Introduction section needs to be improved regarding terminology and structure, and the description of the methodology also requires some clarification. Details are given in the following comments.
Major comments
(1) The main question to be answered by the present study is: Does AA forcing explain the observed positive anomaly in windstorm losses during the 1980s and 1990s? (Or at least: to what extent does it explain the anomaly?) However, this question can only be answered with a climate model that actually simulates the observed anomaly. In particular, since the observed anomaly occurred under the full historical forcing (not just AA forcing), the above question can only be answered with a climate model that simulates the anomaly under full historical forcing (the so-called CMIP6 ‘historical’ experiment). Then we can ask: Is this simulated anomaly driven by AA forcing or by another external forcing, or may it have occurred through internal variability?
Unfortunately, the manuscript does not provide any information regarding this point. In my opinion, the most straightforward, and probably easiest, way to address this issue would be to repeat the analysis for the corresponding model simulations with full historical forcing. The obtained time series of the (low-pass filtered) windstorm losses could then simply be added to those obtained with AA forcing only, already shown in Figure 2.
I also suggest computing the corresponding time series from the ERA5 reanalysis, as there the observed positive anomaly in windstorm losses should definitely be visible.
One could perhaps create a figure with six panels, one for each model. Each panel could show the time series of: the individual ensemble members and of the ensemble mean under AA forcing only, the same (in another color) under full historical forcing, and the ERA5 time series.
This may help to judge the realism of the individual models in simulating the anomaly to be explained by this study, in terms of both magnitude (expressed as relative change) and timing.
It may turn out that some of the six models do not simulate the observed anomaly at all, in which case those models cannot be used to explain the occurrence of the anomaly, simply because it does not exist in those “model worlds”.
On the other hand, if all six models do actually simulate the observed anomaly to a sufficient degree of accuracy under full historical forcing, and if the AA forcing only simulations reproduce that anomaly (to some extent), then this would be an explicit indication for AA forcing being responsible for the observed anomaly (to that extent).
(2) The terminology and structure of the Introduction section are rather confusing.
First, the four possible drivers (lines 23-24) are ordered in a counter-intuitive way. The order is: volcanic eruptions, internal variability, solar variations, AA forcing, so the first is an external forcing, then follows internal variability, and then two external forcings again. I think it makes more sense to first list the possible external forcings and then mention internal variability. The subsequent paragraphs repeat this counter-intuitive order, so they should be reordered accordingly. Moreover, the paragraph from line 34 to line 47 is obviously about internal variability, but line 45 mentions the solar cycle, which represents an external forcing. In addition, it is not entirely clear to me how the concept of internal variability is used in this manuscript.
Second, internal variability is called a ‘driver’ in the manuscript. This may make sense if properly defined, but as such a definition of the term ‘driver’ is not provided, it may be misunderstood as something similar to a forcing. In general, in my opinion, the terminology is sometimes imprecise throughout the manuscript, and I really suggest sharpening the terminology and wording.
Third, I am not sure whether all those rather lengthy explanations in the Introduction section are really needed. If condensed to what is really needed for this manuscript, some of my above caveats may already disappear.
Specific comments
(3) Line 105: Is the 98th percentile computed from October to March only or from all months of the year? And is it computed from all years of the control simulation (i.e., from several hundreds of years for each simulation) or from a shorter reference period? And is it computed from each ensemble member separately or from all members together (for a specific model)?
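For context, the threshold-exceedance computation this comment asks about can be sketched as follows, assuming a Klawa-and-Ulbrich-style cubic-excess loss index; the function name, toy data, and the particular percentile choice shown are illustrative assumptions, not the manuscript's actual settings:

```python
import numpy as np

def loss_index(wind, pop, v98):
    """Cubic-excess windstorm loss index (Klawa-and-Ulbrich style).

    wind : (time, cell) array of daily maximum 10 m wind speeds
    pop  : (cell,) population weights
    v98  : (cell,) local 98th-percentile wind speed threshold
    """
    excess = np.maximum(wind / v98 - 1.0, 0.0)   # relative exceedance, zero below threshold
    return (pop * excess ** 3).sum(axis=1)       # population-weighted sum over cells

# Toy data: 10 days, 3 grid cells (values are arbitrary)
rng = np.random.default_rng(0)
wind = rng.uniform(5.0, 30.0, size=(10, 3))
pop = np.array([1.0, 2.0, 0.5])

# One of the choices the comment asks about: the percentile computed
# from the full record with all months pooled (here just the toy record)
v98 = np.percentile(wind, 98, axis=0)

daily_losses = loss_index(wind, pop, v98)        # shape (10,), non-negative
```

Each of the referee's three questions (which months, which years, which members) changes only how `v98` is built, so the answer determines the threshold, not the index formula itself.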
(4) Line 109: What does “of up to three days” mean? This suggests the series may in some cases, i.e., for some storm events, be shorter than three days. Is that true and, if so, in which cases does that happen?
(5) Line 127: “…is computed using the full time series…” What does “full” time series mean? Does it mean “unfiltered” or does it mean “full length” (i.e., several hundreds of years) or does it mean both?
(6) I am surprised that the t-test is explained in the Results section, rather than the Data and Methods section. Also, because the standard error is computed from the control simulations, which have constant external forcings, this error precisely represents/quantifies internal variability. This could be mentioned together with the description of the t-test.
(7) Line 217: “These tabulated values contain a pattern…” As I understand this, the “pattern” refers to the relation between the strength of the AMOC changes and the magnitude of European windstorm loss changes. And this relation is later used to construct some arguments. However: Is there any indication that this relation is either statistically significant or at least physically plausible (i.e., it does not appear just by chance)?
(8) Line 227: “…the magnitude of modelled AA forcing is smaller than best estimates of observed…”; versus line 231-232: “…it is possible that the models studied here have too strong AA forcing…” – which sounds contradictory to me.
Citation: https://doi.org/10.5194/egusphere-2025-6232-RC2
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 222 | 105 | 14 | 341 | 36 | 41 |
RC1: 'Comment on egusphere-2025-6232', Anonymous Referee #1, 26 Feb 2026
Summary: The manuscript analyses ensembles of simulations from six general circulation models covering the period 1950-present, driven solely by changes in anthropogenic aerosol (AA) forcing. The goal is to identify the AA signature in the economic losses from high winds across Europe. These simulations are compared with control simulations with constant external forcing set at preindustrial levels.
The main conclusion is that the AA signal in wind-related losses is clear, although identifying the exact mechanisms is hampered by the high model spread.
Recommendation: In my opinion, the research question is interesting, and the results could be useful, despite the large model spreads. However, I also see several aspects in the manuscript that need clear improvement. My main concerns include the statistical analysis and several paragraphs concerning the description of internal and externally forced variability and the role of the AMOC, which, in my opinion, are quite unclear.
Main points:
1) Basically, the only statistical analysis conducted to identify the impact of AA on wind-related losses is a t-test to assess changes in the mean between two periods. I have several issues with this approach:
- The AA forcing is not constant in time, and so the impact of AA on wind-related losses should display a comparable time evolution. I am aware that the complexity of the mechanisms involved, including the purported impact of AA on the AMOC and its feedback on storminess, may involve some temporal lag, but of the model time series shown in Figure 2b only one shows a time evolution that can be roughly compared to the time evolution of the short-wave downwelling radiative forcing shown, for instance, in Hassan et al. The SWR displays a very clear maximum in the last two decades of the 20th century, but the time series of losses shown in this manuscript (Figure 2b) do not show, by far, this type of behaviour, with the possible exception of the model CanESM5.
Therefore, I think a more sophisticated statistical methodology should be applied to identify the AA signal in wind-related losses. A stationary t-test is not enough.
2) The way the t-test is conducted is unclear. The data description states that the time series of wind-related losses has been smoothed by a 20-year Butterworth filter ‘to highlight the decadal variations in the results’ (line 125). Does this mean that the t-test has been applied to the smoothed data? Also, the ensuing paragraph seems to indicate that the standard deviations of the series, needed to estimate the ‘separation’ of the mean values, have been calculated using the smoothed values. If this is the case, then the calculation is not correct, because the number of degrees of freedom (the N whose square root divides the standard deviation) is not the number of time steps, but much less. Otherwise, one could artificially inflate the precision of the mean estimate simply by smoothing the original data. This needs to be clearly explained.
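The degrees-of-freedom issue can be illustrated with a small sketch: low-pass filtering a white-noise series leaves it strongly autocorrelated, so the effective sample size entering a t-test is far smaller than the number of time steps. The filter order, cutoff, and AR(1)-based effective-N formula below are illustrative assumptions, not the manuscript's actual setup:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(42)
n = 500                          # years of a hypothetical control run
x = rng.standard_normal(n)       # white noise: truly independent samples

# 20-year low-pass Butterworth filter (4th order; parameters are illustrative)
b, a = butter(4, 1.0 / 20.0, btype="low", fs=1.0)
xs = filtfilt(b, a, x)

def lag1(y):
    """Lag-1 autocorrelation."""
    y = y - y.mean()
    return float((y[:-1] * y[1:]).sum() / (y * y).sum())

r1_raw, r1_smooth = lag1(x), lag1(xs)

# AR(1)-style effective sample size: near n for the raw series,
# collapsing to a handful of independent samples after smoothing
n_eff = n * (1 - r1_smooth) / (1 + r1_smooth)
```

Dividing the standard deviation of the smoothed series by sqrt(n) instead of sqrt(n_eff) overstates the precision of the mean by roughly sqrt(n / n_eff), which is the inflation the comment warns about.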
Particular points
3) The explanation of what constitutes internal variability is not correct and should be thoroughly revised. For instance, the text states that ‘Climate models consistently produce slower climate variations in the North Atlantic sector in the absence of all external forcings, which is referred to as internal variability.’ This sentence is misleading. First, internal climate variations are produced by models forced by constant forcing (not the absence of forcing); secondly, internal variability occurs at all time scales (from second-scale turbulence to millennial-scale ocean currents).
‘This multidecadal driver can be broken down into two different components.’.
Internal variability is not a ‘driver’, and this sentence may lead to confounding internal variability with external climate drivers.
‘The first concerns the Atlantic Meridional Overturning Circulation (AMOC)’.
The AMOC is a term that describes a three-dimensional ocean circulation. It is not ‘internal’ or ‘external’ variability per se. Actually, the manuscript later describes the impact of an external forcing (AA) on the AMOC. This paragraph is really not well-structured and sounds superficial.
‘The second component is atmosphere-based, and consists of shorter-timescale extremes of sufficient magnitude to alter multidecadal averages of storminess’.
Again, the atmospheric variability may be internal or externally forced, at all time scales, not only the slow variations. Although high-frequency internal variability is also reflected in low-frequency variability (through sampling variability), some atmospheric processes generate intrinsic low-frequency internal variability.
Next, the manuscript cites El Niño as an example of atmospheric internal variability, whereas it is very well known that ENSO arises through the coupling of the tropical ocean with the tropical atmosphere.
There are many other aspects in this description that need revision. I really hate to be harsh, but this part of the introduction, in my opinion, needs a complete revision.
In addition, this part of the introduction is really not necessary for the manuscript's goal. I interpret the author as suggesting that AA may directly affect the atmospheric circulation, and that the AMOC, in turn, modifies the meridional heat transport and thus the atmospheric circulation. Whereas the direct path is very rapid, the AMOC path may display a delay of several years, due to the sluggish response of the deeper ocean. But this is only my interpretation of what the author wishes to say after reading the whole manuscript. The issue of internal versus externally forced variability is, in my opinion, just steering the flow of the text away from the relevant messages, even if it were properly phrased.
4) ‘Observed multidecadal changes in European storm losses align with the AA-forced signals’
Is there a reference for this? I really doubt that this sentence, as written, can be correct, since wind-related losses would be primarily affected by GDP and/or population growth. Does the author mean ‘normalised’ losses? Beyond socio-economic factors, is there really evidence that AA is the main physical factor affecting wind-related losses, given the large internal variability of the atmosphere? If the sentence were correct, it would imply that atmospheric variations are almost entirely driven by aerosol forcing, which is not true. This sentence really requires a very solid backup and very careful phrasing (what does ‘align’ mean exactly?).
5) ‘In brief, sulphates have produced the largest radiative forcing in the industrial period, growing from a relatively small amount at the start of the 20th century to peaks in the 1980s and 1990s’
I guess the author means the largest *changes* in short-wave anthropogenic forcings. The largest radiative forcing is by far the sun. This is an example of inaccurate writing that can be found elsewhere in the manuscript.
6) The contribution of socioeconomic factors to the wind-related losses is taken into account by multiplying the physical (wind) factors by the population in a particular grid-cell (equation on page 4, equations are not numbered!) . It seems the population is considered constant over time (?). This would be quite unrealistic, as population and GDP growth over time would, I think, increase the exposure to wind extremes and thus dominate any trends in wind-driven losses. It would be rather easy to include population or GDP growth. Is there a reason not to do so? This point should at least be discussed.
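The exposure-growth concern can be made concrete with a toy calculation; the 1 % per year growth rate is purely hypothetical. A flat physical hazard combined with growing exposure still produces a strong upward loss trend, which a constant-population weighting cannot capture:

```python
import numpy as np

years = np.arange(1950, 2015)
hazard = np.ones(years.size)            # flat physical wind signal, no trend at all
pop_const = np.ones(years.size)         # exposure held fixed, as the equation seems to assume
pop_growing = 1.01 ** (years - 1950)    # hypothetical 1 % per year exposure growth

loss_const = hazard * pop_const         # flat loss series
loss_growing = hazard * pop_growing     # trends upward despite unchanged hazard

growth_factor = loss_growing[-1] / loss_growing[0]   # roughly 1.9 over 1950-2014
```

This is why loss-normalisation (dividing observed losses by a time-varying exposure proxy) or an explicit discussion of constant weights matters for the trend comparison the reviewer requests.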