This work is distributed under the Creative Commons Attribution 4.0 License.
Calibrating Interdependent Photochemistry, Nucleation, and Aerosol Microphysics in Chamber Experiments
Abstract. Laboratory experiments addressing complex phenomena such as atmospheric new-particle formation and growth typically involve numerous instruments measuring a range of key coupled variables. In addition to independent calibration, the combined dataset constrains not only the parameters of interest but also the critical instrument calibrations. Here we find good agreement between production and loss rates of sulfuric acid (H2SO4) in an experiment performed at the CERN CLOUD chamber involving oxidation of sulfur dioxide (SO2) in the presence of ammonia (NH3) at 58 % relative humidity, driving new-particle formation and growth by H2SO4 + NH3 nucleation initiated by O3 photolysis under several light sources. This closure requires consistency across numerous parameters, including: the particle number and size distribution; their condensation sink for H2SO4; the particle growth rates; the concentration of H2SO4; and the nucleation coefficients for both neutral and ion-induced pathways. Our study shows that accurate agreement can be achieved between production and loss of condensable vapors in laboratory chambers under atmospheric conditions, with accuracy ultimately tied to the particle number measurement (i.e., a condensation particle counter). This, in turn, implies that parameters such as the H2SO4 concentration and particle size distributions can be determined to comparable precision.
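The production–loss closure described in the abstract can be sketched numerically. Below is a minimal, illustrative Python example (all values are hypothetical placeholders, not CLOUD measurements) balancing a fixed H2SO4 production rate against condensation-sink, wall, and dilution losses, with the condensation sink computed from a toy size distribution using a Fuchs–Sutugin transition-regime correction:

```python
import math

# All values are hypothetical placeholders, not CLOUD measurements.
D = 0.08               # H2SO4 diffusivity in air, cm^2 s^-1
c_bar = 2.4e4          # mean thermal speed of H2SO4, cm s^-1
lam = 3.0 * D / c_bar  # effective mean free path, cm

def fuchs_sutugin(kn, alpha=1.0):
    """Transition-regime correction factor (accommodation coefficient alpha)."""
    return (1.0 + kn) / (1.0 + (4.0 / (3.0 * alpha) + 0.377) * kn
                         + 4.0 / (3.0 * alpha) * kn * kn)

def condensation_sink(dp_cm, n_cm3):
    """CS = sum_i 2*pi*D*dp_i*beta(Kn_i)*N_i, in s^-1."""
    cs = 0.0
    for dp, n in zip(dp_cm, n_cm3):
        kn = 2.0 * lam / dp
        cs += 2.0 * math.pi * D * dp * fuchs_sutugin(kn) * n
    return cs

# Toy size distribution: diameters in cm (10, 50, 100 nm), number in cm^-3
dp = [10e-7, 50e-7, 100e-7]
N  = [2.0e4, 1.0e4, 5.0e3]

CS = condensation_sink(dp, N)
P = 1.0e6                        # H2SO4 production rate, cm^-3 s^-1
k_wall, k_dil = 1.5e-3, 1.0e-4   # first-order wall loss and dilution, s^-1

# Steady-state H2SO4: production balanced by all first-order losses
h2so4_ss = P / (CS + k_wall + k_dil)
```

This is only a sketch of the balance; the manuscript's actual treatment additionally includes time dependence, charge, and Van der Waals collision enhancements.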
Competing interests: At least one of the (co-)authors is a member of the editorial board of Atmospheric Measurement Techniques.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. Views expressed in the text are those of the authors and do not necessarily reflect the views of the publisher.
Status: final response (author comments only)
- RC1: 'Comment on egusphere-2025-2412', Anonymous Referee #1, 28 Sep 2025
- RC2: 'Comment on egusphere-2025-2412', Anonymous Referee #2, 07 Jan 2026
This manuscript presents a comprehensive and highly rigorous framework for the calibration of complex chamber experiments, using the CLOUD facility as a case study. The authors tackle the formidable challenge of disentangling interdependent variables—ranging from gas-phase photochemistry and wall losses to aerosol microphysics and charge dynamics—by employing a novel strategy of “disaggregating loss terms.” I find this work to be of exceptional quality. The theoretical depth, particularly the treatment of Van der Waals enhancements to explain condensational narrowing and the nuanced analysis of “wet vs. dry” growth pathways, provides significant physical insights beyond mere calibration. The manuscript is honest in its assessment of uncertainties (e.g., the potential inhomogeneity of mixing) and demonstrates a high level of fidelity between the modeled processes and the observational data. This study not only validates the data quality of the CLOUD experiment but also establishes a new methodological benchmark for the interpretation of atmospheric simulation chamber data. I recommend publication subject to Minor Revisions. The manuscript is well-written and logically structured. However, I have identified a few areas where the clarity of the methodological innovation and the rigorousness of the mathematical notation could be improved. Addressing the following specific points will, in my view, further strengthen the impact and readability of this excellent work.
- I recommend a structural change to highlight the methodological advancement: Please separate the text in lines 28-33 (from “Our objective is to scale...” to “...among all of the factors”) into a new, standalone paragraph. These lines define the study’s core innovation—the shift from seeking scalar calibration factors (F) to optimizing a joint probability density function () that explicitly accounts for covariance. This separation is necessary to ensure the reader immediately grasps the paradigm shift in your calibration methodology, rather than viewing it as a mere continuation of standard procedures.
- There is a notation conflict regarding alt. In line 64, the text states “the amplitude of each light appears as alt”. However, in line 119, alt is redefined as the “calibration factor” while slt represents the amplitude. This inconsistency must be resolved to avoid confusing the measured signal with the calibration coefficient. Furthermore, given the paper’s focus on calibration, it is recommended that the authors explicitly break down alt into its physical components (i.e. cross-sections, quantum yields, and geometric efficiency). Additionally, line 263 introduces a new variable . What’s the difference between and alt?
- The authors express the production rate as in line 258. At first glance, explicitly including the measurement calibration factor () in the definition of the physical production rate (P) seems counter-intuitive, as the actual chemical production is physically independent of the CIMS detection efficiency. After reading lines 259–262, I presume this formulation is a mathematical strategy to decouple the absolute magnitude scaling from the relative non-linearities, thereby keeping the factor close to unity during the optimization process. If this is the case, please clarify the rationale in the text. Without explanation, it appears as though the authors are conflating the measurement equation with the process equation. Readers need to understand that this is a parameterization choice for the inverse problem, not a statement of physical dependency.
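The identifiability point raised in the last comment can be made concrete with a toy forward model (hypothetical, not the authors' actual CIMS equation): if the instrument signal is S(t) = F·c(t) and c(t) follows dc/dt = P − k·c, then only the product F·P is constrained by the signal, so folding F into the production term is purely a parameterization choice:

```python
import math

# Hypothetical forward model (not the authors' actual CIMS equation):
# c(t) = (P/k) * (1 - exp(-k*t)); the instrument reports S(t) = F * c(t).
def signal(t, F, P, k):
    return F * (P / k) * (1.0 - math.exp(-k * t))

F_true, P_true, k_true = 0.8, 1.0e6, 2.0e-3
times = [i * 50.0 for i in range(1, 40)]
obs = [signal(t, F_true, P_true, k_true) for t in times]

# A parameter set with F = 1 and the product F*P absorbed into the production
# term reproduces the signal identically: F and P are identifiable only as a
# pair unless an independent calibration pins F.
alt = [signal(t, 1.0, F_true * P_true, k_true) for t in times]
max_rel_diff = max(abs(a - o) / o for a, o in zip(alt, obs))
```

Here `max_rel_diff` is at floating-point noise level, illustrating the degeneracy the reviewer asks the authors to explain.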
Citation: https://doi.org/10.5194/egusphere-2025-2412-RC2
- RC3: 'Comment on egusphere-2025-2412', Anonymous Referee #3, 08 Jan 2026
This manuscript presents a study that integrates observations and models in a novel way, attempting to reduce parametric uncertainty in both the collection of those observations and the modeling of those observations - a version of a “digital twin” of the chamber. This is quite an ambitious attempt, even with the qualifications the authors have made regarding the reduced scope of the work (e.g., not attempting the full Bayesian approach to uncertainty quantification). There are a number of key findings in the study that are novel, including:
- how even the temporal response (as opposed to the absolute value) of a gas-phase species concentration (e.g., sulfuric acid) can be a powerful constraint on both aerosol chamber processes (e.g., first-order loss) AND inherent sampling uncertainties associated with chemical ionization
- the deliberate choice of an experimental system (e.g., sulfuric acid/ammonia/water ternary nucleation) that is “constrained enough” (e.g., most of the model uncertainty rests in parameter uncertainty rather than process/structural uncertainty) to permit a somewhat straightforward “tuning” of the system
- the somewhat serendipitous aspects of the experimental runs that observed a sufficient dynamic range in process contributions that allowed for a clearer reduction in measurement and process uncertainty
- the identification and use of a reference instrument/observation (e.g., CPC total number concentration), in combination with a “simple” process system, to perform a step-wise constraint going “backward” from an endpoint (e.g., aerosol number concentration) to a starting point (e.g., photolysis).
- a workflow that can be used to address uncertainty more broadly in other systems.
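The first finding above — that the temporal response of an uncalibrated signal constrains first-order loss — can be illustrated with a toy example (all numbers hypothetical): the slope of ln(signal) during an exponential decay recovers the loss rate for any constant calibration factor, because the factor only shifts the intercept:

```python
import math

k_loss = 1.5e-3   # hypothetical first-order loss rate, s^-1
c0 = 2.0e7        # hypothetical initial H2SO4 concentration, cm^-3
times = [i * 100.0 for i in range(20)]

def fitted_slope(ts, ys):
    """Ordinary least-squares slope of ys versus ts."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

# ln(F * c0 * exp(-k t)) = ln(F * c0) - k t, so a constant calibration
# factor F changes the intercept but not the fitted decay rate.
recovered = []
for F in (0.2, 1.0, 5.0):
    log_signal = [math.log(F * c0 * math.exp(-k_loss * t)) for t in times]
    recovered.append(-fitted_slope(times, log_signal))
```

Each entry of `recovered` equals the true loss rate, independent of F — precisely why precise (rather than absolutely calibrated) H2SO4 measurements suffice for this constraint.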
This work should be published, but I do have some concerns that I hope can be addressed regarding length and clarity as described below:
- lines 29-30: Do these models also contain "parameters" with varying degrees of uncertainty (e.g., nucleation exponents) that should also be incorporated into the overall optimization? To what degree is model "structural" uncertainty (e.g., mis-represented/missing processes) a consideration in this workflow?
- Figure 1:
- Do the "arrows" represent process functional dependencies? (e.g., rates of production/loss as functions of concentrations and rate constants, etc.?) Can some explanation on what the “arrows” mean be included?
- Would Figure 1 be a convenient place to introduce some of these process “reaction steps” that more explicitly include parameter definitions?
- lines 73-74: What does “semi” independently calibrated mean?
- line 76: Can you perhaps add some statement/reference on the observed robustness/stability of O3/SO2 calibration seen in other studies, etc.?
- line 77: Should it be instead “All gas-phase instruments are corrected for vapor losses during sampling…”?
- line 82: Can an appropriate reference for the CPC2.5 be included? e.g., ideally point towards an experimentally-derived size-dependent counting efficiency during CLOUD?
- line 89: Can we define these process symbols (e.g., PH2SO4, jO(1D)) either in text or in figure 1?
- line 93: How is the "amplitude" of the size distribution measurement being precise different from the "absolute magnitude" of the SMPS being less accurate described in line 95?
- line 95: Given the size range of the DMA-train, I would have thought that uncertainties in charging efficiencies (especially below 3 nm) would be an additional source of unknown uncertainty.
- line 96: Can a citation be included for the AIS that can point readers to “systematic uncertainties”?
- Figure 2:
- panel (a) Can a short description/explanation of the vertical dashed gray lines referring to the individual “stages” be added to assist the reader?
- panel (b) Are the time scales in panels a.) and b.) intended to be aligned? A consistent time-scale in those panels (run time vs. time of day) would be helpful to the reader.
- panel (b) Perhaps use “stage” instead of “sub-run” for reader consistency?
- panel (b) Revise to “follows the steps in light intensity as seen in panel (a)”?
- panel (c.) Can it be made more clear where the merge is between the nano-SMPS and SMPS data that contribute to the observed size distribution?
- lines 99-100: Can there be some consistency between the phrases “co-condensation of H2SO4 and NH3” and “H2SO4-limited growth” that are frequently used throughout the manuscript?
- line 104: Would it not be more accurate to state that the instruments provide direct measurements of particle concentration and number size distribution as opposed to "support constrained and accurate determination”?
- line 106: Would it not be more accurate to say that the size distribution and total number are directly measured (as opposed to well-constrained), and additionally provide constraints on other processes?
- line 153: Can these parenthetical references (e.g., UVH, UVX, LED) be also added to the Figure 2a.) legend?
- line 162: Is the “H2SO4 depletion” referring to the more gradual periods of decreasing concentration mentioned just before (e.g., stages 5-6 and 14-15)? A clarification would be appreciated.
- Figure 3:
- What is meant by “Raw” charged particle observations? Are these data prior to any correction or inversion?
- line 193: A more general comment that the term “first-order” is used extensively throughout the manuscript, and some clarification on what “first-order” refers to might be helpful, e.g., first-order with respect to number concentration.
- line 193: For the phrase “requiring only precise H2SO4 measurements”, would it not require also that the calibration is constant (albeit unknown) over the time period of the first-order decay, not just precise measurements?
- Figure 4:
- What is meant by “raw” chemical drivers for H2SO4 formation? Is this raw data that has not been corrected or calibrated?
- line 204: “All the data in Figs 2 and 4 require some amount of refinement.” Would the term "data conditioning" be more accurate than "refinement"?
- line 205: What is meant by "sharp edges”? Is it more like “maintain the temporal dynamics”, or “sudden changes in concentration”?
- line 211: How does the particle growth time compare to the chamber mixing time?
- line 215: How does the number concentration uncertainty change as this "step-function" assumption is relaxed to the more experimentally-observed “sloping” size-dependence in the cut-off size range?
- line 296: Can it be re-mentioned that the multiple instruments contributing to the “merged” number size distribution include the DMA-train, nano-SMPS, and regular SMPS?
- line 305: Is this derivation adding substantively to the discussion/interpretation? Could it be possibly moved to a supplement? A more general comment: perhaps just emphasize the contributions of the various constraining parameters?
- line 321: Is the term “α_{i,p} E_{μ,i,p} ϵ_{i,p} e_{i,p} B_{i,p}” the “overall collision adjustment”? If so, having the term “s_i/4” right after it is somewhat confusing. In other words, it might be easier to read if the term “α_{i,p} E_{μ,i,p} ϵ_{i,p} e_{i,p} B_{i,p}” follows right after the phrase “overall collision adjustment”, so that the term “s_i/4” can follow right after “nominal H2SO4 condensation speed”.
- Figure 5:
- In the y-axis label in panel (a), where is “vdep/(c/4)” defined?
- line 332: How does this use of "data conditioning"/interpolants improve precision? The precision of retrieved parameters?
- line 333: Why not experimentally characterize (as well) the detection efficiencies of the combined nano-SMPS and regular-SMPS system so that quantitative closure could have been achieved between total number concentration (CPC2.5) and integrated total number concentration (SMPS)? Additionally, as an aside, how were charging and counting efficiency corrections at sizes below 7.2 nm determined and applied?
- Figure 6:
- “Particle measurements after calibration to CPC2.5 values”. Strictly speaking, this is not a calibration, but a “scaling”.
- “the constraints are especially strong”. What does it mean for a constraint to be “strong”? Is a “strong constraint” equivalent to “low measurement uncertainty”?
- line 339: “(after correcting for the SMPS undercount)”. Is this argument of CPC and SMPS agreement after scaling resting on the observation that the "calibration factor" is "constant" over size?
- line 364-365: How is "accurate constraint" vs. "precise constraint" distinguished here?
- line 368: Should “run 1844” be “run 1833”?
- Figure 7:
- How sensitive is the optimization/calibration workflow to the contribution of this “small source of unknown origin”?
- line 376: “these (precise) level steps”. What is “precise” about these level steps?
- line 381-382: Is the “small unknown source of H2SO4” really considered "negligible" if it is also included in the optimization workflow?
- line 403: The parenthetical “horizontal” can be removed here.
- line 409-411: If condensational loss is contributing around 50% to the total loss during this period, this seems to imply to me that particle measurements (e.g., those from which CS is determined) are important, as opposed to “regardless of any particle measurements”.
- line 422-423: Are these two models run in a "coupled" fashion?
- line 441: “This is crude, but because the fraction removed is small, errors caused by this crude treatment are negligible.” Can this be verified quantitatively?
- line 448-449: Why not simply implement the (cited) ternary nucleation parameterizations (exponents and pre-factors) instead of treating the nucleation coefficients as “adjustable” parameters in the optimization? My thinking is to be consistent: either (1) fully use the best information available, or (2) if not using the experimentally-derived parameterizations, treat the ternary nucleation parameterization itself as a fully adjustable parameterization.
- line 464-465: Why does apparent "agreement" between model nucleation rates influence choice selection?
- line 474: “and the figures are easier to read with this narrower time range”. It's not clear to me how "readability" should be used as a criterion for which stages to analyze?
- line 482: “When species are not effectively non-volatile, that in turn depends on the volatility of the vapor species”. This statement seems awkward and redundant. It equivalently reads "when species are volatile, that in turn depends on the volatility..." Can this be clarified?
- line 485: Can the partition coefficient “Ki,p” be defined?
- line 529-530: Was there not sample drying prior to charging and mobility classification?
- line 567-570: Why would the nucleation coefficient vary with stage? Is the nucleation mechanism different between stages?
- Figure 10:
- “Essentially all the features of the observations are reproduced by the simulation with good fidelity, including the timing, intensity, growth rate, and final size of the nucleation bursts during each stage.” Where is this model-measurement comparison of timing, intensity, growth rate, and final size described (and/or presented) quantitatively?
- line 579-580: Where is the growth rate comparison illustrated?
- line 590-592: What does “inhomogeneity” refer to? Spatial distribution?
- line 600: Where is it shown/illustrated that the nucleation rate coefficients remain nearly constant as sulfuric acid and ion concentrations vary? (And what does “nearly constant” mean?)
- line 611-613: How is "clear signs of three modes" consistent with "hint of a maxima at 5 nm"? To me, “hint” is not equivalent to “clear”. It might be worth including some cautionary statement on the potential for "over-fitting" modes.
- Figure 11:
- “(a) Measured size distribution, combining DMA train and scaled SMPS values”. Were the DMA-train data also scaled to ensure good "overlap" with the scaled SMPS data?
- line 653: Can “light intensity” be slightly changed to “changes in light intensity”?
- line 657: “At a minimum it is important to interpolate these observed mixing ratios”. What does this mean? Is this interpolation over time?
- line 659: Is the “wall timescale” referring to the “wall-loss timescale”?
- Figure 12:
- “(c) Measured and modeled H2SO4 (with modeled HO2 and OH)”. I'm assuming the red circles are observations? Can this be stated? Also, what is the red solid line referring to? And are the OH and HO2 lines overlapping?
- line 673: Should “major contributing” be revised to “major contributing processes”?
- Figure 13:
- “Simulated (curve) and observed (points) gas-phase precursors with smoothed and interpolated measurements”. Just for confirmation, is it the observed points that are smoothed and interpolated? Does it not make more "sense" to interpolate model output to observed time scales?
- line 690: The phrase “there is nowhere to hide” feels editorial and non-technical. Is it necessary to include?
- line 736: Does nucleation at 2 nm need to be consistent with the sizes at which nucleated clusters form in the sectional model? (e.g., at 0.8 nm)?
- Figure 14:
- panel a: Are the points the observed ion signals? Can this be stated?
- panel c: What are symbols/line colors for the simulated and observed data?
- “Finally, the simulated N5 trace reasonably reproduces the CPC2.5 observations (shown in gray)”. Are these referring to the filled circles? They don’t look gray to me.
- line 774: “this modal simulation reproduces the observations with good fidelity.” What are the quantitative criteria for determining "good fidelity"?
- line 794: “quantitatively reproduces the concentration dips”. Without well-defined metrics for evaluation, this reproduction seems more qualitative than quantitative.
- line 817: instead of “are only meaningful”, could you instead use “are interpretable”? E.g., these experiments are meaningful only because they are interpretable.
- Figure 19:
- “The system is often not in steady state (the dashed net rate is not always zero)”. The net rate line does not appear to be "dashed" to my eyes.
- line 941: “Uncertainties in this analysis rest on the accuracy of CPC2.5 data and the precision of the H2SO4 measurements”. Does analysis uncertainty not also critically depend on DMA-train uncertainties (e.g., size-dependent counting and charging efficiencies below 7 nm)?
- line 944-945: Can further justification for "correcting" the SMPS data be provided?
- line 945-946: What does it mean for "precise H2SO4 behavior" to be "strong"?
- line 955-957: I find it somewhat concerning that there was no sample drying prior to charging and mobility classification in the SMPS. Additionally, would this also not impact CPC measurement in particular for sizes that have strong size-dependent activation?
- line 964: “calibration factor is constrained to within approximately 0.1”. What does this mean? That the calibration factor uncertainty is constrained to with 10%?
- line 997: “A formal optimal error estimation will require a rigorous Bayesian treatment with a microphysical model that fully couples both the gas-phase chemistry and the evolving particle charge distribution.” I would argue that a rigorous treatment would also include formal instrument simulators (and their transfer functions) that capture "transfer function error/uncertainty".
Citation: https://doi.org/10.5194/egusphere-2025-2412-RC3
The study by Donahue et al. presents an interesting approach to constraining H₂SO₄ production and loss based on particle number measurements, effectively through an optimization exercise across a multiparameter space coupled with modeling, ultimately claiming to minimize systematic and random calibration biases. Although lengthy, the paper is very well written, and the CLOUD team has made significant contributions in this area. I believe the paper could be publishable in AMT after the authors address some considerations, which I detail below.
Major comments:
Minor comments: