Understanding Aerosol-Cloud Interactions in a Single-Column Model: Intercomparison with Process-Level Models and Evaluation against ACTIVATE Field Measurements
Abstract. Marine boundary-layer clouds play a critical role in the Earth's energy balance. Their microphysical and radiative properties are strongly influenced by ambient aerosols and dynamical forcings. In this study, we evaluate the representation of these clouds and the related aerosol-cloud interaction processes in the single-column version of the E3SM climate model (E3SM-SCM) against field measurements collected during the NASA ACTIVATE campaign over the western North Atlantic, and we intercompare it with high-resolution process-level models. Results show that E3SM-SCM, driven by the ERA5 reanalysis, reproduces the cloud properties as well as the high-resolution WRF simulations do. When driven by stronger surface forcings combined with weaker subsidence taken from a WRF cloud-resolving simulation, both E3SM-SCM and the WRF large-eddy simulation produce thicker clouds. This indicates that a proper combination of large-scale dynamics, sub-grid-scale parameterizations, and model configurations is needed to obtain optimal performance of cloud simulations. In E3SM-SCM sensitivity tests with fixed dynamics but perturbed aerosol properties, a higher aerosol number concentration leads to more numerous but smaller cloud droplets, resulting in a stronger shortwave cloud forcing (i.e., stronger radiative cooling). This apparent Twomey effect is consistent with prior climate model studies. Cloud liquid water path shows a weakly positive relation with cloud droplet number concentration, associated with precipitation suppression, which differs from the nonlinear relation approximated from prior observations and E3SM studies and warrants future investigation. Our findings indicate that the SCM framework is a key tool to bridge the gap between climate models, high-resolution models, and field observations and to facilitate process-level understanding.
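For reference, the two aerosol-cloud interaction metrics the abstract discusses are commonly quantified as logarithmic susceptibilities. The following is a generic textbook formulation, not equations taken from the preprint itself:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard susceptibility definitions for the two effects mentioned in the
% abstract (generic forms; the symbols are conventional, not from the preprint).
\begin{align}
  S_N &= \frac{\mathrm{d}\ln N_d}{\mathrm{d}\ln N_a}
      && \text{Twomey (first indirect) effect, } S_N > 0,\\
  S_L &= \frac{\mathrm{d}\ln \mathrm{LWP}}{\mathrm{d}\ln N_d}
      && \text{LWP adjustment, reported here as weakly positive,}
\end{align}
where $N_a$ is the aerosol number concentration, $N_d$ the cloud droplet number
concentration, and LWP the cloud liquid water path.
\end{document}
```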
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-3149', Anonymous Referee #1, 27 Jan 2024
Understanding Aerosol-Cloud Interactions in a Single-Column Model: Intercomparison with Process-Level Models and Evaluation against ACTIVATE Field Measurements
Tang et al. 2024; egusphere-2023-3149
Review
The authors perform several modeling experiments based on a flight during the ACTIVATE campaign. First, they compare the results of a range of models (WRF as CRM, WRF as LES, and E3SM as SCM) in reproducing properties of the clouds and boundary layer observed during the campaign. Then, the authors demonstrate that it is necessary to use identical simulation forcings to obtain comparable results between the models. The authors then perform a set of experiments to test the sensitivity of E3SM-SCM to the treatment of aerosols in the model (size distribution, species, and vertical distribution).
Overall, there were several points in the manuscript that left me confused about what exactly was being done with the simulations (General Comment 1). I also find it difficult to understand the usefulness of these results because only one hand-picked case is used in these experiments (General Comment 2). Finally, I question whether the E3SM-SCM is appropriate for these science questions, as it appears not to represent the effects of aerosol scavenging, an important mechanism in aerosol-cloud interactions (General Comment 3). To address these issues, I expect that major revisions are necessary before publication.
General Comments
- Overall, I was confused by the experimental design at several parts of the manuscript. I think the manuscript would benefit from more details for each experiment about exactly how many simulations are performed and with what model/forcing differences. Some of the Specific Comments below address this issue.
- There is only one case here and the choice of this single case is not motivated in the text. The authors correctly identify this as a limitation to the interpretation and application of their results (Line 475 and elsewhere). Utilizing only a single hand-picked case both introduces bias and limits the application of these findings to general ACI. I am not convinced these results can be impactful to the community due to this limitation. By what criteria was this case chosen instead of the other eleven “process study” cases? Most of the experiments in this work are performed with a computationally cheap single column model. It seems that more cases could be added, thereby increasing representativeness of the results without considerably increasing complexity or computational cost.
- The authors find that different aerosol concentrations strongly affect Nd but have minimal effects on the macroscale simulation properties (LWP, cloud fraction, surface rain). Only changing the hygroscopicity to 10^-10 has any effect on the macroscale properties. I wonder about the cause of this lack of macroscale aerosol effects. I wonder if the limited scope (a portion of a single hand-chosen flight of a much larger and long-lasting campaign), the model characteristics (formulations, simulation duration), or some other factor might be suppressing a macroscale ACI response. Please address this issue in the text.
Minor Comments
Line 19: “as good” should be replaced with “as well”.
Section 2.1: Please explain why this case was chosen (other than it was used in previous publications by the authors). Why not include any of the other 12 “process study” flights?
Line 120: Please provide a description of the Xie et al. (2019) modification. This addition can be very brief.
Line 125: “use” should be “using” and “has” should be “have”. There are a few other examples of small mix-ups like this.
Fig 2: Please stretch this figure vertically so the reader can examine changes with height.
Fig 4: The King Air observations are limited in time. It would be helpful to include GOES-16 ABI retrievals for Cloud Top Height, if they are available, as the authors have already done for Total Liquid Water Path.
Fig 6: The ACT Coarse mode fit does not represent the observations, resulting in the appearance of a too-small and too-populated coarse mode (more than 3 times as populated as the near-surface legs!). I suggest re-doing the fits but with a minimum on the fitted mu of more than 1-2 micron. It also seems that simply removing this mode from the ACT leg could be appropriate, given the observed counts above 1 micron are considerably fewer than the below-cloud legs (log-scale currently hides this). The authors mention that the coarse mode probably doesn’t exert a large effect on the simulations (and I agree) but these fits should be recalculated to avoid misleading the reader in Figure 6. Doing so requires repeating the sensitivity experiments, but the SCM framework permits computationally cheap simulations.
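A minimal sketch of the kind of bounded lognormal refit suggested here, assuming dN/dlogDp samples for the coarse tail are available as arrays (the values below are hypothetical placeholders, not ACTIVATE data):

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(log_dp, n_tot, mu, sigma):
    """Single lognormal mode: dN/dlogDp for total number n_tot (cm^-3),
    median diameter mu (um), and geometric standard deviation sigma."""
    return (n_tot / (np.log10(sigma) * np.sqrt(2.0 * np.pi))
            * np.exp(-(log_dp - np.log10(mu))**2 / (2.0 * np.log10(sigma)**2)))

# Hypothetical coarse-tail size distribution samples (NOT ACTIVATE data)
dp = np.array([0.8, 1.0, 1.5, 2.0, 3.0, 5.0, 8.0])          # diameter, um
dndlogdp = np.array([5.0, 2.5, 1.2, 0.7, 0.3, 0.1, 0.03])   # cm^-3

# Bounds keep the fitted median diameter above 1 um, as the comment suggests
p0 = (1.0, 2.0, 2.0)
bounds = ([0.0, 1.0, 1.05], [np.inf, 10.0, 3.0])
popt, _ = curve_fit(lognormal_mode, np.log10(dp), dndlogdp, p0=p0, bounds=bounds)
print("fitted: N = %.2f cm^-3, mu = %.2f um, sigma = %.2f" % tuple(popt))
```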
Figures 7-8 and related text: I am confused by these sensitivity experiments and need more description. Are they only 1-hour simulations, or are they run for the full 14 hours but only analyzed during 15-16 UTC? I do not even understand how many E3SM-SCM simulations contributed to Fig 8. Are the aerosol concentrations measured during the different flights used throughout the column but adjusted to retain a desired dependence of total number with height? Please elaborate on the experiment design and exactly what is being shown in Figures 7 and 8.
Fig 7g: Please spell out the MPDW2P acronym.
Fig 10: Surface precipitation is interesting but is of course affected by sub-cloud evaporation. I think cloud-base precipitation rate would be more informative here. I suspect that the cloud-base precipitation rate is also very small, which would partially explain why aerosols seem to have little effect on macroscale quantities. Perhaps the changes in aerosol concentrations are not enough to result in macroscale changes to the simulations?
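A minimal sketch of the diagnostic suggested here, picking the precipitation flux at the lowest cloudy level of a column as a proxy for cloud-base rain rate; the profiles and threshold are hypothetical, not taken from the paper:

```python
import numpy as np

def cloud_base_precip_rate(precip_flux, cloud_frac, threshold=0.01):
    """Return the precipitation flux at the lowest model level whose cloud
    fraction exceeds the threshold, as a proxy for cloud-base rain rate.
    Both profiles are ordered surface -> model top."""
    cloudy = np.where(cloud_frac > threshold)[0]
    return 0.0 if cloudy.size == 0 else precip_flux[cloudy[0]]

# Hypothetical SCM column profiles (mm/day and cloud fraction), surface -> top
precip_flux = np.array([0.02, 0.05, 0.10, 0.15, 0.12, 0.00])
cloud_frac  = np.array([0.00, 0.00, 0.30, 0.80, 0.60, 0.00])
print(cloud_base_precip_rate(precip_flux, cloud_frac))  # 0.10, vs 0.02 at the surface
```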
Line 326: Before stating any physical reasoning behind the positive slope, you should include an uncertainty range for the slope value and account for systematic uncertainty that would not be included in the regression-based uncertainty range, which assumes independent samples, etc. I suspect this slope falls within that uncertainty range, indicating that any physical reasoning is not meaningful. I don’t doubt the significance of the slopes in the other Fig 9 panels, but it would be a good idea to include them, as well.
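One way to obtain a slope uncertainty that relaxes the independence assumption is a moving-block bootstrap. Below is a minimal sketch; the log(Nd) and log(LWP) values are hypothetical stand-ins for the Fig. 9 samples, and the block length is an assumed tuning choice:

```python
import numpy as np

def slope_ci_block_bootstrap(x, y, block_len=6, n_boot=2000, alpha=0.05, seed=0):
    """Confidence interval for an OLS slope from a moving-block bootstrap,
    which partially accounts for serial correlation between samples."""
    rng = np.random.default_rng(seed)
    n = len(x)
    starts = np.arange(n - block_len + 1)
    n_blocks = int(np.ceil(n / block_len))
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = np.concatenate([np.arange(s, s + block_len)
                              for s in rng.choice(starts, n_blocks)])[:n]
        slopes[b] = np.polyfit(x[idx], y[idx], 1)[0]
    return np.quantile(slopes, [alpha / 2, 1 - alpha / 2])

# Hypothetical log(Nd) vs log(LWP) samples (placeholders, not the paper's data)
log_nd = np.log(np.array([40, 55, 70, 90, 110, 140, 170, 200, 240, 280.0]))
log_lwp = np.log(np.array([60, 62, 66, 65, 70, 72, 71, 75, 78, 77.0]))
print("95% slope CI:", slope_ci_block_bootstrap(log_nd, log_lwp))
```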
Line 331-332: This is a good point and is a major limitation to this study. Utilizing only a hand-picked case both introduces bias and limits the application of these findings to general ACI.
Fig 11 and elsewhere: Please use NaCl or Salt instead of NCL.
Fig 15: I don’t know what I’ve learned from this sensitivity experiment. Are aerosols activated only in layers with prescribed aerosols? In cloudy layers that do not have prescribed aerosols, is the Nd value set to 10/cm^3, after which cloud droplets are transported within the cloud? If scavenging is not represented in E3SM-SCM, how are we to understand the complex ACIs?
Citation: https://doi.org/10.5194/egusphere-2023-3149-RC1
- AC1: 'Reply on RC1', Shuaiqi Tang, 23 Apr 2024
RC2: 'Comment on egusphere-2023-3149', Anonymous Referee #2, 05 Feb 2024
Review for Understanding Aerosol-Cloud Interactions in a Single-Column Model: Intercomparison with Process-Level Models and Evaluation against ACTIVATE Field Measurements
In this paper the authors use the ACTIVATE field campaign to compare the E3SM SCM to different flavors of WRF (in both CRM and LES mode) with regard to the simulation of clouds and boundary-layer turbulence observed during the campaign. The second part of the paper presents a set of E3SM-SCM experiments focused on the sensitivity to the treatment of aerosols in the model.
While I found aspects of this paper to be interesting, it also felt like a hodge-podge of ideas/experiments that lacked a clear unifying focus as to what the authors hoped to accomplish/address. The two distinct sections of the paper feel a bit disjointed and I think the authors could do a better job tying them together a bit more. In addition, the second part of the paper focuses on just one case from ACTIVATE to draw some conclusions. I feel this needs to be addressed by testing robustness against more flights from the ACTIVATE campaign. In addition, there were several other sources of confusion in this document that need to be addressed (please see itemized list). Overall, I feel a major revision is necessary before this article is suitable for publication.
- Overall, the paper is well written enough that I understand what the authors are saying, but there are frequent typos and grammar mistakes that are distracting and need to be addressed upon resubmission.
- In the conclusions of the paper the authors state (and allude to this in other sections of the manuscript): “A unique feature of this study is the multi-scale model intercomparison using SCM, CRM, and LES models, which provides a comprehensive process-level understanding of ACI in more details compared to individual models”. I’m left very confused by this statement. The CRM and LES models were only used in the first half to compare the macroscopic aspects of the SCM simulation (clouds, turbulence, etc.); I do not see how they were used to help understand ACI directly, other than as a validation tool.
- I found the comparison of E3SM-SCM to WRF interesting but was confused why the authors felt it pertinent to include the SCM and LES runs with the CRM forcing. The conclusion they draw, that a “proper combination of large-scale dynamics, sub-grid parameterizations, and model configurations is needed to obtain performance…”, seems obvious, and I am not sure why it needed detailed analysis. Unless I’m missing something, I suggest that the authors remove these curves from the figures (which are too busy with these curves included) and perhaps state in a sentence or two that they explored the sensitivity to large-scale forcing. To me this analysis and section felt like a distracting tangent.
- Page 8, line 173 the authors state “…neither resolved nor parameterized at the sub-grid scale in E3SM-SCM”. What exactly is “resolved” in an SCM? Isn’t SCM just one column where all processes are parameterized? Or am I misunderstanding something about the E3SM SCM?
- In the second half of the document, which explores E3SM-SCM to aerosol sensitivity the authors use one case and make the statement that their conclusions “warrant more cases” to test robustness. I completely agree… why not include more cases then?
- In the second half of the paper the authors state “…since Nd is underestimated it is difficult to demonstrate the value of adding aerosol vertical variation”, which is blamed on the weak vertical velocity updraft coming from the model. Why not do sensitivity experiments where the input vertical velocity to the aerosol activation is boosted by a certain factor to test the sensitivity? This is the type of experiment the SCM is ideal for. In the conclusions the authors state “the evaluation of SCM simulations against the ACTIVATE measurements is helpful for understanding and improving turbulence representation over this region”. I don’t think the authors have currently done this, but experiments that show the possible improvements/sensitivity with better turbulence linked to aerosol activation could provide justification for such a statement to be retained.
- Section 4.3 ends with a statement that “E3SM SCM cannot provide information on sensitivity of aerosol to vertical distribution”… then why present results in this section at all? Adding a couple of sentences or a paragraph to the conclusions/summary section stating that the authors attempted to address this in the SCM but could not because of x, y, and z would be sufficient, and could help to motivate the development of a validated aerosol model in the SCM.
Citation: https://doi.org/10.5194/egusphere-2023-3149-RC2
- AC2: 'Reply on RC2', Shuaiqi Tang, 23 Apr 2024
Co-authors: Xiang-Yu Li, Jingyi Chen, Armin Sorooshian, Xubin Zeng, Ewan Crosbie, Kenneth L. Thornhill, Luke D. Ziemba, and Christiane Voigt