The ACCESS-AM2 climate model strongly underestimates aerosol concentration in the Southern Ocean, but improving it could be problematic for the modelled climate system
Abstract. The interaction of natural marine aerosol with clouds and radiation is a significant source of climate model uncertainty. The Southern Ocean represents a key area to understand these interactions, and a region where significant model biases exist. Here we provide an evaluation of the Australian Community Climate and Earth System Simulator atmosphere model, which includes a double-moment aerosol scheme. We evaluate against condensation nuclei (N10) and cloud condensation nuclei (CCN) from seven ship campaigns and three terrestrial locations, spanning the years 2015–2019. We find that N10 is heavily underestimated in the model across all regions and seasons by more than 50 %, and in some cases by over 80 % at higher latitudes. CCN is also strongly underestimated over marine and Antarctic regions, often by more than 50 %. We then perform seven sensitivity tests to explore different aerosol configurations. We find that updating the dimethyl sulfide climatology and turning on the primary marine organic aerosol flux marginally improves marine CCN by 4–9 %. N10, however, was reduced by 3–9 %, resulting in worse model performance. The Southern Ocean radiative bias is also reduced by this combination of changes, with limited adverse effects. We also test altering the sea spray flux to use wind gust instead of mean wind speed, which significantly improved CCN in the marine regions, but resulted in detrimental impacts on the radiation budget. Our results indicate significant problems in the model’s microphysical processes and with over tuning. We suggest this needs to be addressed in a holistic way.
RC1: 'Comment on egusphere-2024-3125', Anonymous Referee #1, 14 Nov 2024
The authors test the ability of an aerosol scheme, within an atmospheric model, to simulate Southern Ocean aerosol concentrations and find it lacking in skill compared to measurements from multiple in-situ campaigns. Implementation of several structural model changes leads to minimal increases in model skill, or to regional/seasonal skill increases that are offset by skill reductions in other regions/seasons. The authors suggest a more holistic approach to model development is needed that accounts for multiple aerosol and possibly microphysical process representations simultaneously. Additionally, the authors highlight the need for more aerosol size distribution measurements with which to evaluate the effects of model developments on Southern Ocean aerosol and clouds.
Although the research is informative about potential model development priorities, and could thus make an important contribution to the literature, some aspects of how the article is written need to be addressed before publication:
Title and sub-titles:
The 2nd part of the title is not clearly supported by the results and should be changed to better reflect the paper content.
Sub-title of section 3.4.2 is incorrect. The content describes data processing rather than statistical methods.
Vague language:
Language is imprecise in parts of the article, leaving the reader to guess the authors' intended meaning. The discussion section stands out as particularly vague. One key example: in the first paragraph of section 6, the phrase ‘as a whole’ is unexplained, yet this seems to be the primary recommendation of the paper. The authors need to take the time and space to frame their hypothesised model development framework in more detail and with greater clarity to be convincing.
Other examples of vagueness in language that need to be addressed include:
In section 3.1, the description of the model indicates the ACCESS model was run in atmosphere-only mode. So, the model is essentially UM10.6 GA7.1 with GLOMAP-mode, using CMIP6 and CEDS emissions, nudged towards ERA5 data. The coupled aspect of the model seems irrelevant, yet ACCESS is framed as the model being evaluated. What is ACCESS actually adding to the simulations?
Line 198 to 202: The meaning of this paragraph is hard to unravel. Clearer language is needed.
Line 304: GLOMAP-mode provides these values, though they may not have been selected by the authors. This description needs to clearly state this was a choice, rather than a model deficiency.
Figure 3 caption: 25th and 75th percentiles of what?
Line 340: Standard deviation of what? Calculated from which data?
Line 463: The meaning of the final sentence is obscure and unreferenced.
Constraint/constrained is used incorrectly in the article. I think the authors mean ‘restricted to’, ‘in’, or ‘limited to’. Constraint has a specific meaning related to model uncertainty.
There is extensive use of acronyms, which may be considered appropriate for some readers, but reduces readability. In particular, readability is reduced by using acronyms for observation stations.
Presentation of results with more obvious scientific reasoning:
Much of the manuscript needs to be rewritten to highlight key discoveries to the reader. The authors should consider where meaning is assumed and could be clarified. Additionally, the text often contains only statements about model-to-observation comparisons, without interpretation of meaning. Occasionally, statements conflict with results, which suggests they have not been considered deeply.
Some essential changes include:
Section 3.4: The first paragraph here is unnecessary. Nothing of value is added, so this should be removed.
Line 335: First sentence is confusing.
Paragraph starting line 360: In this location, the NPF scheme test is the only sensitivity test to shift the model from biased low to biased high. Some interpretation of results is needed here. This result implies the persistent model bias might be partially overcome by implementing a more sophisticated NPF parametrization. Also, the simplicity of the NPF parametrization needs to be mentioned in this section to help the reader understand why the improvements are spatially restricted.
Section 4 has many long-winded descriptions that do not lead to insights or statements of how the results affect model interpretation.
Section 4.3: There is no mention of the BL NPF sensitivity test here, even though it is the only test to reduce activation ratios.
Line 563: The meaning of the first sentence is incongruous with the results.
Missing detail and context:
As mentioned above, some sections are heavy with text, whilst others lack detail and critical information.
For example, where the Humphries data set is introduced, no context is provided for why it might be better, or more useful, than previous data sets. Furthermore, some sense of the motivation for including the specific sensitivity tests chosen would be extremely useful in the first paragraph of section 3.2. Other specific examples of missing detail/context include:
Line 207: Why are time-varying DMS datasets preferable? Need to say what value is added.
Line 222: ‘underway’ needs a description
Line 317: Why isn’t the assumption made that the gridbox containing the observation would be the best comparison to make? There is no explanation for why the authors are even considering using a gridbox to the SW of the station.
Line 379: The authors state ‘This is a key area of development for GLOMAP-mode’. This statement needs to be put in context. The suggested model developments are only important if the priority is a model with increased skill at simulating aerosol concentrations over remote polar regions. The authors have assumed this is the case, with an implied further assumption that aerosols in these regions are more climatically important than aerosols elsewhere.
Section 4.1: Some brief description of the overall under-prediction of CCN concentration and seasonal cycle amplitude, and what this implies should be given up front. Currently, this message is hidden amongst discussion of individual simulations.
Old model version:
The authors have evaluated the impact of structural changes to model parametrizations, using a relatively old version of the GLOMAP-mode aerosol scheme, without reference to published model changes that would affect results. Results in this article need to be discussed with reference to later model versions and with some consideration of how recent model developments may impact the results presented here.
For example, no reference is made to the inclusion and evaluation of primary marine organic aerosol in later model versions. Additionally, sea salt density has been updated, as has deposition velocities via land surface representations, both of which would affect the sensitivity test results.
Figures:
Font size in figures is sometimes too small. Additionally, thicker lines with better color contrast, or some other method, is needed to distinguish between simulations.
On line 511, the authors state they have evaluated other cloud properties, which is essential to make a complete analysis of the impact of the sensitivity tests on aerosol, clouds and aerosol-cloud interactions. Equivalent figures should be included in a supplement, so the reader can interpret the wider effects themselves.
References:
Some additional references that have been overlooked include:
Schutgens et al. (2017) doi.org/10.5194/acp-17-9761-2017 in section 3.4.1
Additional literature evaluating more sophisticated NPF schemes and the climatic effects of those schemes.
The final sentence in section 4.2
Model structural changes implemented after this model version, particularly where they may affect interpretation of results here (e.g. https://gmd.copernicus.org/articles/13/6383/2020/)
Spelling and syntax:
- Line 101: ‘(GLOMAP)’ over-used
- Line 149: ‘volcinic-sourced’
- Equation 7: Numerator should be ‘CHL’
- Line 261: Remove ‘/,’
- Line 290: SYO not defined
- Line 310: inline
- Section 3.4.2: This could easily be a single paragraph
- Line 353: missing ‘is’
- Line 605: bis
Citation: https://doi.org/10.5194/egusphere-2024-3125-RC1
RC2: 'Comment on egusphere-2024-3125', Anonymous Referee #2, 16 Nov 2024
General comments
This manuscript compares simulated Southern Ocean (SO) aerosols in the ACCESS-AM2 Earth System Model’s (ESM’s) atmospheric component to in situ SO aerosol observations. While ESMs’ representation of SO aerosols is highly uncertain and important for future climate prediction, historically, limited in situ observations of aerosols in the SO have made evaluating ESMs’ SO aerosol representations difficult. This work represents a substantial, novel contribution to scientific progress within the scope of ACP because it presents a first ESM-observation comparison in the SO across latitude and season. Before this manuscript can be accepted for publication, additional context needs to be provided on the significance of differences between the simulated and observed aerosol properties (described in detail in major issue 1 below).
Major issue 1: additional context needed on significance of model-observation differences
At present, the manuscript text presents a large number of percentage differences between observed and simulated SO aerosol properties. The only context for the significance of these properties comes from the figure shading around 25th-75th percentiles for the control simulation (not of the other 7 simulations) and observations. From only this information, it is very challenging for the reader to evaluate whether these percentage differences are significant and to assess if the first part of the manuscript title (“The ACCESS-AM2 climate model strongly underestimates aerosol concentration in the Southern Ocean”) is justified. I outline below a few different sources of uncertainty that are missing or only mentioned qualitatively in the present text.
- Point measurements vs grid cell means: The manuscript does not appear to address at all that it is comparing observed point (or near point when integration time is a few minutes) measurements to grid cell mean simulated measurements. This could induce large biases depending on the shape of the sub-grid scale distribution of aerosols, especially for the coastal sites. Some context on the sub-grid scale distribution could be gained from plotting distributions of measurements near in time and space and assessing skewness.
- Instrument and instrument simulator uncertainty: The manuscript is missing context on instrument uncertainty. Further, for the CCN comparison, the manuscript should provide uncertainty estimates due to interchanging a super-saturation-based threshold for the observations and a size-based threshold for the model. This translation will be uncertain due to assumptions needed to translate particle activation supersaturation to particle activation size (as is somewhat mentioned presently but not made quantitative) and due to imprecision in supersaturation control in the instruments. Ideally this uncertainty could be propagated through to the reported model results, but if this is too difficult, sensitivity tests of different radius thresholds should be reported; a sketch of such a test is given after this list. This latter test seems straightforward from the code in process_aer_along_ship.ipynb. Additionally, some of the campaigns (e.g., MICRE) have both 0.5% supersaturation aerosol count and aerosol size distribution measurements. A comparison of the two CCN methods can be made directly on the observations and its error assessed.
- Variability of non-control model runs: The manuscript should comment in some form on the variability (25th-75th percentile) of the non-control simulations. Ideally this could be visualized in Figures 3-4 (though this might be too visually confusing) or the supplemental.
- Exhaust corrections: Some context on the magnitude of these corrections on the observed N10 and CCN at different sites/ships should be given as well as the uncertainty introduced by these corrections.
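To make the suggested radius-threshold test concrete, here is a minimal sketch of how it could be run on lognormal modal output. The mode parameters and candidate thresholds are illustrative placeholders only, not values taken from the manuscript, from GLOMAP-mode, or from process_aer_along_ship.ipynb.

```python
import numpy as np
from scipy.special import erfc

def number_above_radius(n_tot, r_g, sigma_g, r_thresh):
    """Number concentration in a lognormal mode with dry radius > r_thresh.

    n_tot   : total mode number concentration (cm-3)
    r_g     : geometric-median dry radius of the mode (same units as r_thresh)
    sigma_g : geometric standard deviation of the mode
    """
    return 0.5 * n_tot * erfc(np.log(r_thresh / r_g) / (np.sqrt(2.0) * np.log(sigma_g)))

# Hypothetical soluble modes (N cm-3, median radius um, sigma) -- placeholders only.
modes = [(150.0, 0.02, 1.59),   # Aitken-like
         (120.0, 0.08, 1.59),   # accumulation-like
         (5.0,   0.40, 2.00)]   # coarse-like

for r_cut in (0.020, 0.025, 0.030, 0.035):  # candidate dry-radius thresholds (um)
    ccn_proxy = sum(number_above_radius(n, rg, sg, r_cut) for n, rg, sg in modes)
    print(f"r > {r_cut * 1000:.0f} nm : {ccn_proxy:6.1f} cm-3")
```

Reporting the spread of simulated CCN across a plausible range of thresholds would give the reader at least a rough bound on the uncertainty introduced by substituting a size threshold for a supersaturation threshold.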
Specific comments
Abstract lines 13-14: “Our results indicate significant problems in the model’s microphysical processes and with over tuning.” Given how the manuscript is currently written, I believe this sentence should be rephrased to reflect that this is the authors’ opinion. This statement is one possible interpretation of the manuscript’s results. I see only one paragraph of the present manuscript referencing the tuning issue and only tangentially. (See Lines 606-611 for more detail.)
Abstract line 14: “We suggest this needs to be addressed in a holistic way.” I only see part of one sentence explicitly supporting this abstract statement: “… [this] points to a need to consider model development in this space as an entire system rather than individual components.” I think this should have slightly more discussion to be mentioned in the abstract.
Lines 105-106: “By performing these evaluations [of N10 and CCN], the model biases associated with aerosol-cloud-radiation interactions around the Southern Ocean and Antarctic can be better understood and the degree of uncertainty reduced.” Another sentence or two connecting why these aerosol properties specifically (especially N10) would accomplish this would make the introduction a lot stronger. (Just a suggestion.)
Section 3.2: The current model configurations tested (Table 1) all focus on aerosol sources. What about aerosol sinks? Even if it is out of scope to test these components of the aerosol scheme, too, the text should at least mention how the model represents aerosol sinks (e.g., coagulation, rain out) and discuss how these processes’ parameterization might contribute to model-observation discrepancies.
Lines 299-301: “Some evaluation of the model[‘]s meteorology has been carried out, but is not shown in this work. It was found to be satisfactory, which is in line with our expectations due to nudging.” Can this be presented in a supplemental? With such minimal description, it is not clear what is “satisfactory.”
Line 316-317: “At KCG, we have used the exact model gridbox that the station is located in, as choosing a gridbox to the south-west of the station resulted in poorer performance.” This sentence seems to suggest at other locations the exact model gridbox was not used? If this is true, this conflicts with my understanding of Section 3.4.2 for a stationary location.
Section 3.4.2: Do ships cross multiple grid cells in a day? (Is the daily average lat-lon location problematic?)
Figures 3, 4: It is unclear exactly what “the monthly and annual median concentrations of N10/CCN” in the captions means, especially when compared to the methods description. Does this mean “monthly and annual median of mean daily concentrations of N10/CCN” or “median of monthly and annual concentrations of N10/CCN”? I assume the former from the methods, but this is not clear from the text. I am not sure whether the colors used are colorblind safe.
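The ambiguity matters because the two readings generally give different numbers, since means and medians do not commute. A small pandas sketch of the two readings, using a purely synthetic series (names and values are illustrative only):

```python
import numpy as np
import pandas as pd

# Synthetic hourly N10 series (cm-3) -- illustrative only.
rng = np.random.default_rng(0)
hourly = pd.Series(rng.lognormal(mean=5.0, sigma=0.8, size=24 * 90),
                   index=pd.date_range("2018-01-01", periods=24 * 90, freq="h"))

# Reading 1: daily means first, then the monthly median of those daily means.
median_of_daily_means = hourly.resample("D").mean().resample("MS").median()

# Reading 2: monthly median taken directly over all samples in the month.
direct_monthly_median = hourly.resample("MS").median()

print(pd.DataFrame({"median of daily means": median_of_daily_means,
                    "direct monthly median": direct_monthly_median}))
```

Stating explicitly which aggregation order was used in the captions and methods would remove the ambiguity.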
Figures 6, 8: It would be helpful to compare the different model configurations’ TOA SW radiative flux directly to the CERES product instead of plotting line contours where the control bias changes signs. Even just adding one panel with the zonal means would be helpful to support the manuscript’s conclusions about which configurations improve the model relative to observations. I find it very difficult to get this important information out of the current plots. (Just a suggestion.)
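For illustration, one way the suggested zonal-mean panel could be produced with xarray is sketched below; the file paths, variable names and dimension names are hypothetical, not those of the authors' output or the CERES product.

```python
import xarray as xr
import matplotlib.pyplot as plt

# Hypothetical file and variable names -- placeholders only.
model_runs = {"control": "toa_sw_control.nc", "gustiness": "toa_sw_gust.nc"}
obs = xr.open_dataset("ceres_toa_sw.nc")["toa_sw_up"].mean("time")

fig, ax = plt.subplots()
for name, path in model_runs.items():
    sim = xr.open_dataset(path)["toa_outgoing_sw"].mean("time")  # assumed variable name
    # Interpolate the observations onto the model grid, then take the zonal-mean bias.
    bias = sim - obs.interp(lat=sim["lat"], lon=sim["lon"])
    ax.plot(bias["lat"], bias.mean("lon"), label=name)
ax.axhline(0.0, color="k", lw=0.5)
ax.set_xlabel("Latitude")
ax.set_ylabel("TOA SW bias vs CERES (W m$^{-2}$)")
ax.legend()
plt.show()
```

A single panel of this kind would make it much easier to see which configurations move the Southern Ocean radiation bias toward the observations.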
Figure 7: Since LWP isn’t compared to an observational product and is only mentioned as a key controller of TOA SW flux (already shown), this figure and its associated text don’t seem necessary to the main manuscript text and could go in a supplement.
Lines 511-512: “We note that we have evaluated other cloud properties, and the aerosol direct effect via clear sky radiation, but for brevity will not discuss them here.” This seems unnecessary to mention if these evaluations aren’t even roughly summarized and aren’t included in a supplemental.
Lines 606-611: This paragraph seems to motivate the second half of the title and the end of the abstract. It is not clear to me that these are unique conclusions from the methods presented and analysis shown. There is some discussion of why the experiments are probing what they are probing, but their implementations still contain such large uncertainties that it is unclear if scattered improvements in the simulated-observed CCN comparison demonstrates “improvement of the physical representation” of aerosols. This is why I suggest toning down the abstract above. Since the title includes “could,” I think it is acceptably couched.
Technical comments
Title: The antecedent for “it” is not clear from the title alone. The antecedent could be either “ACCESS-AM2 climate model” or “aerosol concentrations.”
All of the manuscript’s equations are presented at the end of paragraphs and are not interwoven into the text.
Citation: https://doi.org/10.5194/egusphere-2024-3125-RC2
Data sets
ACCESS-AM2 ship track data Sonya Fiddes https://doi.org/10.5281/zenodo.13864183
Interactive computing environment
ACCESS_aerosol_eval Sonya Fiddes https://github.com/sfiddes/ACCESS_aerosol_eval