the Creative Commons Attribution 4.0 License.
Aircraft In-situ Measurements from SOCRATES Constrain the Anthropogenic Perturbations of Cloud Droplet Number
Abstract. Aerosol-cloud interactions (ACI) in warm clouds alter reflected shortwave radiation by influencing cloud microphysical and macrophysical properties. The state variable controlling ACI is the cloud droplet number concentration (Nd). Here, we examine the perturbations in Nd due to anthropogenic aerosols (∆Nd, PD–PI) using a perturbed parameter ensemble (PPE) hosted in the Community Atmosphere Model version 6 (CAM6). Surrogate models are created for the CAM6 PPE outputs and are used to generate 1 million model variants of CAM6 by sampling 45 sources of parameter uncertainty. The range of uncertain physical parameters related to ACI is constrained with observations of aerosol and cloud properties from SOCRATES. The likely range of uncertain parameters and the associated range of ∆Nd, PD–PI are more strongly constrained with observations of Nd than with observations of cloud condensation nuclei. We conduct sensitivity tests of how constraints on ∆Nd, PD–PI are affected by systematic uncertainties in observations and by limitations in the surrogate models created for the CAM6 PPE outputs. Based on this, we provide guidance on the impact of reducing systematic uncertainty in airborne microphysical observations and in surrogate models.
Status: final response (author comments only)
CC1: 'Comment on egusphere-2025-2009', Marc Daniel Mallet, 10 Jun 2025
The work the authors have done shows that accurate measurements of Nd (cloud droplet concentration) from SOCRATES over the Southern Ocean can strongly constrain global Nd perturbations due to anthropogenic aerosol. The authors conclude that while observations of CCN (cloud condensation nuclei) and Nd also provide a strong constraint, observations of CCN alone only provide a minimal constraint.
For this analysis, the authors used integrated particle counts above 100 nm from a UHSAS (N100) as a proxy for CCN, rather than direct measurements of CCN that were made during SOCRATES (Sanchez et al. 2021). The authors cite McCoy et al. (2021) stating that there is a 1:1 relationship between N100 and CCN for SOCRATES, but Figure S2b in that paper shows that N100 only explains ~half of the variance (r = 0.719) in CCN (at 0.2 % supersaturation). The last sentence of the current discussion emphasizes that the strong constraint provided by Nd measurements is only possible when these measurements are accurate. I therefore wonder what the impact on this analysis might be if the direct CCN measurements were used instead of N100. This type of study (which is very interesting) could be useful for guiding the planning and logistics for future field campaigns. The authors touch on these issues briefly in the current manuscript, but it would be useful to get clarification on the following:
- Is there a reason why the direct CCN measurements (either from the scanning or constant supersaturation CCN counters; Sanchez et al., 2021) collected during SOCRATES were unsuitable for this analysis?
- Given a correlation coefficient of 0.719 between N100 and CCN0.2 during SOCRATES (McCoy et al. 2021; Figure S2b), are the authors confident that using N100 as a proxy for CCN is sufficient to conclude that CCN observations alone can only provide minimal global constraint on Nd?
References
McCoy, I. L. et al. (2021). Influences of recent particle formation on Southern Ocean aerosol variability and low cloud properties. Journal of Geophysical Research: Atmospheres, 126(8), e2020JD033529.
Sanchez, K. J. et al. (2021). Measurement report: Cloud processes and the transport of biological emissions affect southern ocean particle and cloud condensation nuclei concentrations. Atmospheric Chemistry and Physics, 21(5), 3427-3446.
Citation: https://doi.org/10.5194/egusphere-2025-2009-CC1
AC1: 'Reply on CC1', Ci Song, 10 Jun 2025
We thank the reviewer for their comment. Our study builds upon the framework of McCoy et al. (2021), which evaluated the default version of CAM6 using N100 and Nd from SOCRATES. Consistent with their approach, we use N100 as a proxy for CCN to enable comparison across multiple CAM6 simulations with varied parameter combinations. One advantage of using N100 is that it allows direct comparison of our perturbed parameter ensemble (PPE) results with those from previous studies (e.g., Figure 2).
We acknowledge that the correlation between N100 and CCN at 0.2% supersaturation during SOCRATES (r = 0.719; McCoy et al., 2021) indicates that N100 explains only part of the variance in CCN. However, the ratio between N100 and CCN at 0.2% is 1:1 (see Figure S2a,b, McCoy et al. 2021). Thus, the campaign mean N100 will be equivalent to the campaign mean CCN at 0.2% supersaturation. Because the campaign mean N100 is ultimately what is used as the constraint for this study (e.g., Figure 7b), it will be equivalent to using the campaign mean CCN at 0.2%.
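The distinction drawn above between point-wise correlation and equality of campaign means can be illustrated with a short synthetic sketch (illustrative numbers only, not SOCRATES data): two quantities related by a 1:1 mean ratio can correlate only moderately while their campaign means remain interchangeable as a constraint.

```python
import numpy as np

# Illustrative synthetic sketch (not SOCRATES data): a proxy can match the
# campaign mean of the target quantity exactly in expectation while the
# point-wise correlation stays well below 1.
rng = np.random.default_rng(0)

n100 = rng.lognormal(mean=4.6, sigma=0.6, size=2000)   # stand-in "N100" (cm^-3)

# Multiplicative scatter with expectation exactly 1 (mean-corrected lognormal),
# so the ratio of means stays ~1:1 despite substantial point-wise noise.
s2 = np.log(1.25)                                      # scatter variance ~0.25
eps = rng.lognormal(mean=-0.5 * s2, sigma=np.sqrt(s2), size=n100.size)
ccn = n100 * eps                                       # stand-in "CCN at 0.2%"

ratio = ccn.mean() / n100.mean()                       # ~1.0: means equivalent
r = np.corrcoef(n100, ccn)[0, 1]                       # well below 1
print(f"mean ratio = {ratio:.3f}, r = {r:.3f}")
```

The point of the sketch is only that a moderate r does not by itself bias a campaign-mean constraint; whether the real CCN0.2:N100 mean ratio is exactly 1 is the separate question raised in the comment.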
Citation: https://doi.org/10.5194/egusphere-2025-2009-AC1
RC1: 'Comment on egusphere-2025-2009', Anonymous Referee #1, 10 Jun 2025
The authors use field measurements from the SOCRATES campaign to constrain a CAM6 perturbed parameter ensemble. With a primary focus on CCN and Nd, the authors use an emulator to create surrogate models and constrain plausible parameter combinations. Lastly, the authors show how observational uncertainty affects the ability to constrain.
The paper is well written and the figures provide sufficient visual context. There are a few concerns that the authors should address before publication.
Major concerns
The authors stress in several places that the performance of the emulator is crucial for the task at hand. Looking at Fig. 7, the colors of points and shading strongly disagree in many places. I wonder if the authors have an explanation of why the apparent performance is so poor and whether a better emulator is needed (or even possible).
The authors largely leave out cloud macrophysical properties (e.g., cloud fraction, cloud geometric thickness, etc.). Does it go without saying that the nudged PPE runs produce plausible macrophysical properties? The authors should at least provide a brief assessment.
Along with the above concern, I am wondering if this study is limited to stratiform clouds (as no convective scheme is described in Sec. 2). Have the authors ensured that all clouds in CAM6 are stratiform over the SOCRATES domain? How would the results change if a substantial portion was handled by the convective scheme?
The authors use observed surface precipitation rates, but it is unclear where these rates stem from. The authors need to update Section 2 and describe the retrieval.
Minor concerns
l. 16 “possible” rather than “much easier”
ll. 105ff Please briefly describe the synoptic situation encountered during the flights.
ll. 265-266 Could a lower updraft speed also explain this?
Typo(s)
l. 453 “of without”
Citation: https://doi.org/10.5194/egusphere-2025-2009-RC1
RC2: 'Comment on egusphere-2025-2009', Marc Daniel Mallet, 11 Jul 2025
This is a review of ‘Aircraft In-situ Measurements from SOCRATES Constrain Anthropogenic Perturbations of Cloud Droplet Number’ by Song et al. It is an important study that warrants publication in ACP. The analysis is very in-depth and complex. Using Southern Ocean observations to constrain global cloud properties using a perturbed parameter ensemble is never going to be an easy thing to synthesize, and I did have to reread various sections to absorb everything. But I have learned a lot in the process and I think overall this is a great study.
I have a major comment, related to my public comment about the use of N100 as a proxy for CCN, as well as a number of minor and technical comments. I suggest major revisions because I think it is very important that the conclusion that “CCN” observations do not provide any constraint on global Nd is revisited. In reality, I do not think the major revision should take long to address.
Major comments:
The authors use observations of N100 (particle concentration above 100 nm) as a proxy for CCN. They cite another study from the same observational aircraft dataset showing a slope of ~1:1 between CCN (at 0.2% supersaturation) and N100. The authors responded to my public comment regarding this, stating that it is the distribution mean that is used as the observational constraint. I understand the logic of this, but it should be noted that the ratio of campaign mean CCN0.2:N100 in McCoy et al. (2021) Figure S2 is ~1.08 (+/- ~0.3). If we take that average, the use of N100 underestimates CCN0.2 by ~8%. This is a potential known systematic error that is not discussed (e.g. Lines 165 to 175). What impact would this have on the analysis for the observational constraint on the distribution of PPE-emulated global PI - PD Nd? I think this could be roughly determined without needing to repeat the analysis with the actual CCN0.2 observational dataset. I am trying to assess the potential impact by imagining Figure 7b with the vertical grey shaded area (representing observed N100) shifted to the right. It's difficult without the raw data, but I think it would shift the distribution of plausible emulations of PI - PD Nd constrained with only CCN observations to higher values. It's difficult for me to glean from the density contours and colour scale whether the distribution would be narrower or not. It's clear that the Nd-only constraint will still be much stronger, but it might also slightly change the Nd + CCN constraint.
The activation diameter for Southern Ocean aerosol is likely always going to be less than 100 nm for supersaturations of 0.2%, and likely around 80 nm for the aerosol population sampled during SOCRATES (e.g. see Fossum et al., 2018, Figure 2b, where mP air masses contain aerosol with similar characteristics to oceanic air masses south of Australia; Mallet et al., 2025, Figure 2b).
I suspect that the SOCRATES CCN0.2:N100 ratio of ~1.08 shown in McCoy et al 2021 Figure S2a is likely biased low due to some data points where this ratio is close to zero, which is physically implausible. I therefore think an 8% underestimation of CCN is conservative, and it would be interesting to see what the impact would be on Figure 7 and 8 and associated analyses if a ratio closer to the upper end of the error bar (~1.4) in McCoy et al., 2021 Figure S2 was used.
I recognise the huge amount of effort that's already gone into this paper. I also recognise that the N100 data is probably more readily available and processed for comparison to the model output. Ideally the whole analysis would be repeated for the actual QA'd CCN0.2, but that might not be feasible. At the very least I think the authors should test the impact of increasing the campaign mean observed N100 by 8% and by 40% so that it aligns with the campaign mean observed CCN0.2. I think that an increase in the "observed" CCN by 8-40% would probably change the population of plausible PPEs and emulations, which would change the constrained parameter spaces and constrained PD - PI Nd distributions. This would impact Figure 3a/c, Figure 7b, Figure 8, and Figure 9, as well as many of the conclusion and discussion statements regarding the CCN constraint. If those two tests change nothing, then only some of the discussion might need expanding so that other readers with similar thoughts to me understand.
In response to my public comment about the use of N100 as a proxy for CCN0.2, the authors stated “we use N100 as a proxy for CCN to enable comparison across multiple CAM6 simulations with varied parameter combinations. One advantage of using N100 is that it allows direct comparison of our perturbed parameter ensemble (PPE) results with those from previous studies (e.g., Figure 2).” My understanding is that the CAM6 model used for the PPE work uses information from simulated aerosol to calculate an activation diameter for a particular supersaturation (I think from my reading the Abdul-Razzak & Ghan activation scheme with κ-Köhler is used). So it isn’t entirely clear why using observations of N100 is a direct advantage for model comparison over using actual CCN observations. Observations of N100 would be the better choice if CAM6 used a static 100nm activation diameter, but then the use of the term CCN would be misleading in this study and the conclusion would be that observations of N100 alone do not constrain estimates of global PI Nd.
This might seem nitpicking, but it does seem to me that a slight underestimation of CCN by using N100 as a proxy could change part of the outcome of the paper. The value of Nd observations over the Southern Ocean for global climate modelling has been thoroughly demonstrated here. If CCN observations from the pristine Southern Ocean only provide a constraint on global estimates of the impact of anthropogenic aerosols on cloud microphysics when collected in conjunction with Nd observations (which are arguably more expensive and logistically challenging to collect), then that might (and should) influence decision making and justification for future measurement efforts in the Southern Ocean. This paper is potentially quite impactful, so I do think it is worth revisiting the potential implications of using N100 as a proxy for CCN while considering even small differences in the unity of the relationship between these two.
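The ~80 nm activation diameter quoted in the major comment can be sanity-checked with the standard κ-Köhler approximation for the critical dry diameter at a given supersaturation. The sketch below uses nominal constants (T = 285 K, σ = 0.072 N m⁻¹) and illustrative κ values, none of which are taken from the manuscript under review.

```python
def kohler_A(T=285.0):
    """Kelvin coefficient A = 4*sigma*Mw / (R*T*rho_w), in metres.
    Nominal constants: sigma = 0.072 N/m, Mw = 0.018 kg/mol,
    R = 8.314 J/(mol K), rho_w = 1000 kg/m^3 (assumed, not from the paper)."""
    sigma, Mw, R, rho_w = 0.072, 0.018, 8.314, 1000.0
    return 4.0 * sigma * Mw / (R * T * rho_w)

def activation_dry_diameter(kappa, s_c, T=285.0):
    """Critical dry diameter (m) for fractional supersaturation s_c,
    from the kappa-Koehler approximation s_c ~ sqrt(4 A^3 / (27 kappa D_d^3)),
    inverted for D_d."""
    A = kohler_A(T)
    return (4.0 * A**3 / (27.0 * kappa * s_c**2)) ** (1.0 / 3.0)

# Sea-salt-like kappa ~0.7 at 0.2% supersaturation: critical diameter ~82 nm,
# consistent with the reviewer's ~80 nm estimate and below the 100 nm UHSAS cut.
d_marine = activation_dry_diameter(kappa=0.7, s_c=0.002)
# More organic-influenced kappa ~0.3: critical diameter rises above 100 nm.
d_organic = activation_dry_diameter(kappa=0.3, s_c=0.002)
print(f"kappa=0.7: {d_marine * 1e9:.0f} nm; kappa=0.3: {d_organic * 1e9:.0f} nm")
```

This is only a back-of-envelope check that the N100 cut sits near, not exactly at, the activation diameter for pristine marine aerosol at 0.2% supersaturation; the manuscript's own activation scheme (Abdul-Razzak & Ghan, as the reviewer reads it) computes this diameter from the simulated aerosol.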
Minor comments:
The use of CCN and Nd separately and combined to constrain the PPEs is interesting. Could the authors add a few sentences to the conclusion about future directions? Are there any other observable quantities that could also be used given enough measurements (e.g. aerosol composition, aerosol size, ice nucleating particles, cloud ice crystal concentration)? I could imagine a scenario whereby introducing more observational constraints severely limits or even eliminates the number of plausible PPEs/emulations, but I wonder if that in itself could be used to highlight structural issues within these models.
There appear to be some large differences between the PPE members and the emulations. The other review raised this concern. I do not have the capacity to thoroughly review this aspect of the analyses, but I do think the authors have acknowledged potential limitations with the emulations and gone to lengths to consider the impact of the emulator uncertainty (e.g. Figure 8b, Figure 9).
Section 3.3.2 is quite complex. My understanding is that because there is a good relationship between the global Nd for the PI and PD (fig 4), it’s important to demonstrate there is a physically plausible reason for that in order to strengthen the following results/conclusions about the use of present-day pristine observations to constrain PI global Nd. I haven’t rigorously gone through the maths for this section due to time constraints, but the logic seems sound to me.
L247: Are these cases focused only on warm liquid clouds? I would have thought there was a significant occurrence of supercooled liquid in these flights.
Figures 4 and 5: The grey shaded area in Figure 4 is stated to represent the 95% confidence on the interannual range of global oceanic mean Nd from MODIS (visually, ~80 - 115 cm^-3). The black dot in Figure 5 also shows this, but the error bar along the y-axis expresses a much tighter range, despite the caption stating it represents the same thing as Figure 4.
Technical comments:
I noticed a few typos throughout and have highlighted them where I could, but it would be worth a final proofread (including supplementary material) before the next submission.
Figure 1: I very much like this figure. Could the caption indicate whether this is annual, or austral summer?
Lines 354 - 358/Figure 1. Natural aerosols do indeed dominate the SO. But some of the differences between the PI and PD Nd could also be due to changes in non-aerosol drivers (e.g. precipitation). This is discussed later, but the logic could be brought forward earlier (in a brief sentence even), otherwise reading chronologically it might seem that this hasn’t been considered.
Figure 2: Were PPEs 010, 237, and 244 chosen at random as a demonstration of the different outputs? Or do they represent PPEs that resulted in good agreement (010, 237) and poor agreement (244) between the observed and simulated Nd?
Figure 2: What each data point represents is not entirely clear. After reading further on and looking at Figure S1, I think the data points are indeed averages for individual flights, so I recommend changing “data are from individual flight tracks” to “data are averaged for each individual flight track” to make it clearer. Ideally some measure of spread/uncertainty would be expressed around those data points or stated where applicable.
Line 288 “detail” should be “detailed”
Line 292 “process” should be “processes”
Line 378: redundant use of “can”.
Line 415. The last sentence in this paragraph doesn’t read properly. Maybe it should be a comma before “we”.
Figure 7 caption references (d) and (c) but there's only (a) and (b). Furthermore, ideally the legend caption in the right panel would include the ",PD-PI". The range for the grey shading indicating the observed Nd and UHSAS100 should be described in the figure caption. The rainbow colour scale isn't colour-blind or grayscale-printer friendly. I don't know if editorial policy yet is to enforce colour-blind friendly colour scales, but if this figure ends up being redone, I'd encourage a different continuous colour scale.
Line 510: "(e.g. 262)" should be "(i.e. 262)"
Figure S5: Figure title has typo (“verus”)
References (for peer review discussion purposes only, no need to cite within text):
McCoy et al. (2021) – DOI: 10.1029/2020JD033529
Fossum et al. (2018) – DOI: 10.1038/s41598-018-32047-4
Mallet et al. (2025) – DOI: 10.1038/s41612-025-00990-5
Citation: https://doi.org/10.5194/egusphere-2025-2009-RC2