This work is distributed under the Creative Commons Attribution 4.0 License.
Evaluation of Liquid Cloud Albedo Susceptibility in E3SM Using Coupled Eastern North Atlantic Surface and Satellite Retrievals
Abstract. The impact of aerosol number concentration on cloud albedo is a persistent source of spread in global climate predictions due to multi-scale, interactive atmospheric processes that remain difficult to quantify. We use 5 years of geostationary satellite and surface retrievals at the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Eastern North Atlantic (ENA) site in the Azores to evaluate the representation of liquid cloud albedo susceptibility for overcast cloud scenes in DOE Energy Exascale Earth System Model version 1 (E3SMv1) and provide possible reasons for model-observation discrepancies.
The overall distribution of surface 0.2 % CCN concentration values is reasonably simulated, but simulated liquid water path (LWP) is lower than observed, and layer-mean droplet concentration (Nd) comparisons are highly variable depending on the Nd retrieval technique. E3SMv1’s cloud albedo is greater than observed for given LWP and Nd values due to a smaller cloud effective radius than observed. However, the simulated albedo response to Nd is suppressed due to a solar zenith angle (SZA)-Nd correlation created by the seasonal cycle that is not observed. Controlling for this effect by examining the cloud optical depth (COD) shows that E3SMv1’s COD response to CCN concentration is greater than observed. For surface-based retrievals, this is only true after controlling for cloud adiabaticity because E3SMv1’s adiabaticities are much lower than observed. Assuming a constant adiabaticity in surface retrievals, as done in TOA retrievals, narrows the retrieved lnNd distribution, which increases the cloud albedo sensitivity to lnNd to match the TOA sensitivity.
The greater sensitivity of COD to CCN is caused by a greater Twomey effect in which the sensitivity of Nd to CCN is greater than observed for TOA-retrieved Nd, and once model-observation adiabaticity differences are removed, this is also true for surface-retrieved Nd. The LWP response to Nd in E3SMv1 is overall negative as observed. Despite reproducing the observed LWP-Nd relationship, observed clouds become much more adiabatic as Nd increases while E3SMv1 clouds do not, associated with more heavily precipitating clouds that are partially but not completely caused by deeper clouds and weaker inversions in E3SMv1. These cloud property differences indicate that the negative LWP-Nd relationship is likely not caused by the same mechanisms in E3SMv1 and observations. The negative simulated LWP response also fails to mute the excessively strong Twomey effect, highlighting potentially important confounding factor effects that likely render the LWP-Nd relationship non-causal. Nd retrieval scales and assumptions, particularly related to cloud adiabaticity, contribute to substantial spreads in the model-observation comparisons, though enough consistency exists to suggest that aerosol activation and convective drizzle processes are critical areas to focus E3SMv1 development for improving the fidelity of aerosol-cloud interactions in E3SM.
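The susceptibility quantities discussed in the abstract can be summarized in standard notation (a sketch of the conventional chain-rule decomposition in our own notation, not an equation reproduced from the preprint; note that the abstract uses "Twomey effect" for the Nd-to-CCN link):

```latex
% Albedo susceptibility to CCN decomposed through droplet number:
\frac{\mathrm{d}A}{\mathrm{d}\ln \mathrm{CCN}}
  = \frac{\mathrm{d}A}{\mathrm{d}\ln N_d}\,
    \underbrace{\frac{\mathrm{d}\ln N_d}{\mathrm{d}\ln \mathrm{CCN}}}_{\text{Twomey effect, as used here}}
\qquad\text{with}\qquad
\frac{\mathrm{d}A}{\mathrm{d}\ln N_d}
  = \left.\frac{\partial A}{\partial \ln N_d}\right|_{\mathrm{LWP}}
  + \frac{\partial A}{\partial \ln \mathrm{LWP}}\,
    \underbrace{\frac{\mathrm{d}\ln \mathrm{LWP}}{\mathrm{d}\ln N_d}}_{\text{LWP adjustment}}
```

The negative simulated LWP-Nd relationship discussed above corresponds to a negative LWP-adjustment term partially offsetting the positive fixed-LWP response.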
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-998', Anonymous Referee #1, 22 Jun 2023
As indicated by the title, the manuscript reports the evaluation results of albedo susceptibility of marine warm clouds calculated from E3SM simulations. The evaluations make good use of ground- and satellite-based measurements, products, and approaches, and the analyses are comprehensive and thorough. Although, in the end, the main sources of the model deficiencies are not surprising and have been pointed out in the literature, the results presented here are important for the users of E3SM. Hence, I recommend it for publication with some minor revisions for clarity and reproducibility.
1) A large focus of the manuscript has been placed on adiabaticity. It would be most useful for readers to see and understand the distribution of model adiabaticity and the observation-model difference in the early section of the manuscript, which will provide much better context for all the tests (E3SM-sfc, E3SM-TOA, etc.) Otherwise, section 2.4 is quite confusing, and the main goal of those tests does not stand out immediately.
2) Equation 4: It may be useful to decompose further to include adiabaticity explicitly. I would also suggest expanding it for cloud optical depth and effective radius as well. This would help to connect equations, text, and figures in a much more systematic way.
3) Please say a few words on how E3SM determines cloud effective radius and computes albedo, and how E3SM simulations differ from COSP calculations. In various tests, has the relationship between liquid water path, cloud optical depth, and cloud effective radius been retained? If not, how does this impact the interpretation?
4) ENA site is known to have an island effect. Please say a few words on its impact if the island effect has not been considered and removed in the dataset.
5) Min and Harrison (1996) used MWR-based LWP retrieval and flux-based cloud optical depth retrieval to compute effective radius. When the solution is not converging, a default cloud effective radius is used. Please comment on how this dataset has been handled and what is the possible impact if the default situations have not been excluded.
6) Do you really mean to cite McComiskey et al. (2009) for Equation (1)?
7) Please define cloud effective albedo and check the reference, since Long and Ackerman (2000) talked about cloud radiative effect on downward flux, but not albedo.
8) Line 310: I don't understand the logic here. Please elaborate on this a bit.
9) Line 417: I wouldn't say that they are different. In fact, they cover the same conditions, but the observations cover a wider range. The question is why - do you know?
10) Line 523: Please be more specific about the forcing you meant here.
11) Adiabaticity of 200%? Is this mislabeled?
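Items 2 and 5 above both touch on the link between liquid water path, cloud optical depth, and effective radius. A minimal sketch of the standard relation τ = 3·LWP/(2·ρw·re) follows (illustrative only; this is not the Min and Harrison retrieval code, and the numbers are hypothetical):

```python
# Standard relation linking cloud optical depth (tau), liquid water path
# (LWP), and effective radius (r_e): tau = 3*LWP / (2*rho_w*r_e).
RHO_W = 1000.0  # density of liquid water, kg m^-3

def effective_radius(lwp_kg_m2: float, tau: float) -> float:
    """Effective radius (m) implied by LWP (kg m^-2) and optical depth."""
    return 3.0 * lwp_kg_m2 / (2.0 * RHO_W * tau)

# Example: LWP = 100 g m^-2 (0.1 kg m^-2) and tau = 15
# give r_e = 3*0.1 / (2*1000*15) = 1e-5 m, i.e. 10 micrometers.
r_e_um = effective_radius(0.1, 15.0) * 1e6
print(r_e_um)
```

Because the three quantities are tied together by this relation, adjusting any one of them (as in the adiabaticity-scaling tests) implicitly perturbs the others, which is why item 3 asks whether the relationship has been retained.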
Citation: https://doi.org/10.5194/egusphere-2023-998-RC1
RC2: 'Comment on egusphere-2023-998', Zhibo Zhang, 09 Jul 2023
Review of “Evaluation of Liquid Cloud Albedo Susceptibility in E3SM Using Coupled Eastern North Atlantic Surface and Satellite Retrievals” by Varble et al.
The manuscript presents a comparison study of the susceptibility of liquid-phase clouds in the North Atlantic region (i.e., the ENA site) to aerosol perturbations in an Earth System Model (E3SMv1) versus that derived from satellite and ground-based observations. The study carefully collocated the satellite and ground observations and used the COSP simulator to ensure that the different sources of observations and model simulations are comparable. The comparison reveals several differences in both the mean state and the susceptibility of liquid clouds between the E3SM simulation and the observed counterparts, for example, a greater COD susceptibility to CCN in E3SM than in the observations. The potential causes of these differences are analyzed and discussed.
Although the scope of this study is focused/limited, e.g., susceptibility in a particular model in a small region, the methods used in this study and the lessons learned are useful for the user community of E3SM and to the broader community, especially ESM modelers. The manuscript provides in-depth analyses of the model simulations, a bit overwhelming though, and strives to obtain process-level understanding of the potential problems in the model and the factors causing observation-model differences. Overall, it is an illuminating paper with useful information that should be accepted for publication. On the other hand, some parts of the paper are hard to follow, especially for readers not familiar with ESMs. In addition, a few questions and comments, as listed below, need to be addressed and clarified in the revised paper.
Major comments:
- The motivation and objectives for this study, and its potential significance, need to be better explained in the Introduction and echoed in the other parts of the paper. I understand that, to the developers and main users of E3SM, a comparison study like this is important to understand the performance of the model and to diagnose potential problems. I also understand that the DOE-ARM program has a permanent site in the ENA region, which is probably why the study is limited to the ENA region. But I guess a significant portion, if not the majority, of the readers are not aware of this background information. So, I would suggest adding some background information to the Introduction about the E3SMv1 model, e.g., why it is an important model, whether it is widely used, why we should care about v1 when other versions are available, etc., as well as the significance of the ENA region for aerosol-cloud interaction studies. It will help the readers understand the significance of this study and see the potential relevance of this paper to their own research.
- It seems that the assumption of cloud adiabaticity is an important factor causing the observation-model differences. But I don’t think I understand the method and logic for scaling the surface retrievals based on the 80% adiabaticity assumption in section 4.1.1 (to me this section is quite confusing). If my understanding is correct, the cloud adiabaticity is diagnosed from the adiabatic cloud LWP (which is estimated based on cloud-base T and P and cloud thickness) and the observed “true” LWP. As such, the surface-based estimate of Nd can account for the variability of cloud adiabaticity and is therefore arguably more accurate. How is it justified to scale a more accurate measurement based on a simple assumption? Wouldn’t scaling the satellite observations based on the surface estimate of adiabaticity be more reasonable? Moreover, the cloud base, top, and depth retrievals based on ground-based lidar and radar should be pretty accurate. I don’t understand what the basis is for scaling the cloud depth in the computation of Nd. To me, a reasonable pathway for an “apples-to-apples” comparison is to derive an adiabaticity climatology (e.g., monthly mean) or a parameterization scheme that can be used to scale the satellite observations.
- It is also not clear to me how model simulations are scaled based on the 80% adiabaticity assumption. As discussed later in the paper, E3SM severely underestimates cloud adiabaticity. But once the adiabaticity is changed, it seems to affect everything: not only Nd, but also LWP, COD, Reff, and cloud albedo. Are they all adjusted based on 80% adiabaticity, and if so, how? Otherwise, if only Nd is adjusted, then the adjusted Nd is no longer physically consistent with the other quantities, right? Overall, I’m confused by the objective of the adiabaticity-based scaling. Is it intended to demonstrate retrieval uncertainty or model uncertainty?
- It seems from Figure 4 that the LWP statistics are simulated reasonably well in the model. However, the discussion in Section 4.2.1 suggests that the cloud adiabaticity is underestimated. Does this mean that the clouds are too physically thick in the model, or that the cloud-base heights are biased? Is it possible to compare the model-simulated cloud base and thickness with, for example, ground observations? Sorry if I missed it.
- In Figure 17, the overall cloud albedo susceptibility to CCN (dA/dlnCCN) is positive according to observations (both satellite and surface), whereas it is negative in the model if E3SM is sampled using the TOA perspective. This seems to be an interesting and potentially important result, but there isn’t much discussion about it. What is the cause (a too-negative LWP response in the model?)?
- Another thing I failed to understand about Figure 17 is that the uncertainty bar (95% confidence interval) is larger in the observation than the model, but the correlation coefficient (r-value) is significantly smaller in the model than observation. Are the two related and results self-consistent?
- The exact meaning of “Twomey effect” is vague and sometimes confusing in the paper (e.g., Figures 17 and 18). Does it simply mean the sensitivity of Nd to CCN (i.e., dlnNd/dlnCCN)? I would suggest using equations to be clear.
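The adiabaticity diagnostic at issue in several of the comments above can be sketched as follows. This is an illustrative sketch under common assumptions, not the authors' actual retrieval code; in particular, `c_w` (the adiabatic rate of increase of liquid water content with height) is a fixed representative value here, whereas in practice it is computed from cloud-base temperature and pressure, and all numbers are hypothetical:

```python
def adiabatic_lwp(cloud_depth_m: float, c_w: float = 2.0e-3) -> float:
    """Adiabatic LWP (g m^-2) of a cloud of given depth (m).

    c_w is the adiabatic condensation rate (g m^-3 per m of depth);
    a representative constant is assumed instead of deriving it from
    cloud-base T and P as a real retrieval would.
    """
    return 0.5 * c_w * cloud_depth_m ** 2

def adiabaticity(lwp_obs_g_m2: float, cloud_depth_m: float,
                 c_w: float = 2.0e-3) -> float:
    """Ratio of observed LWP to adiabatic LWP (1.0 = fully adiabatic)."""
    return lwp_obs_g_m2 / adiabatic_lwp(cloud_depth_m, c_w)

# Example: a 300 m deep cloud has adiabatic LWP = 0.5*2e-3*300**2 = 90 g m^-2;
# an observed (e.g., microwave radiometer) LWP of 72 g m^-2 then implies
# an adiabaticity of 0.8 (i.e., the 80% value used in the scaling tests).
f_ad = adiabaticity(72.0, 300.0)
```

Scaling a retrieval to a fixed adiabaticity (e.g., 0.8) amounts to replacing the diagnosed ratio above with a constant, which is the step the comments question.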
Minor comments:
- Around line 90: Zhang et al. (2022) used the joint PDF of LWP and Nd to understand the microphysical control and spatial-temporal variability of warm rain probability. It should be cited here along with other recent studies.
- Line 140: “why” should be “what”
- Line 396: Some caveat should be noted when claiming that the simulation of a negative LWP response in E3SM is a “success”, because the discussion later seems to suggest that it is a result of error cancellation (i.e., right for the wrong reason).
- Finally, I applaud and appreciate the effort of the authors to make all the data and scripts used in this study publicly available (see Code and Data Availability session).
Reference:
Zhang, Z., Oreopoulos, L., Lebsock, M. D., Mechem, D. B., & Covert, J. (2022). Understanding the microphysical control and spatial-temporal variability of warm rain probability using CloudSat and MODIS observations. Geophysical Research Letters, 49, e2022GL098863
Citation: https://doi.org/10.5194/egusphere-2023-998-RC2
AC1: 'Response to reviewers', Adam Varble, 07 Sep 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-998/egusphere-2023-998-AC1-supplement.pdf
Co-authors: Po-Lun Ma, Matthew W. Christensen, Johannes Mülmenstädt, Shuaiqi Tang, and Jerome Fast