This work is distributed under the Creative Commons Attribution 4.0 License.
Evaluating spectral cloud effective radius retrievals from the Enhanced MODIS Airborne Simulator (eMAS) during ORACLES
Abstract. Satellite remote sensing retrievals of cloud effective radius (CER) are widely used for studies of aerosol/cloud interactions. Such retrievals, however, rely on forward radiative transfer (RT) calculations using simplified assumptions that can lead to retrieval errors when the real atmosphere deviates from the forward model. Here, coincident airborne remote sensing and in situ observations obtained during NASA’s ObseRvations of Aerosols above CLouds and their intEractionS (ORACLES) field campaign are used to evaluate retrievals of CER for marine boundary layer stratocumulus clouds and to explore impacts of forward RT model assumptions and other confounding factors. Specifically, spectral CER retrievals from the Enhanced MODIS Airborne Simulator (eMAS) and the Research Scanning Polarimeter (RSP) are compared with polarimetric retrievals from RSP and with CER derived from droplet size distributions (DSDs) observed by the Phase Doppler Interferometer (PDI) and a combination of the Cloud and Aerosol Spectrometer (CAS) and two dimensional Stereo probe (2D-S). The sensitivities of the eMAS and RSP spectral retrievals to assumptions on the DSD effective variance (CEV) and liquid water complex index of refraction are explored. CER and CEV inferred from eMAS spectral reflectance observations of the backscatter glory provide additional context for the spectral CER retrievals. The spectral and polarimetric CER retrieval agreement is case dependent, and updating the retrieval RT assumptions, including using RSP polarimetric CEV retrievals as a constraint, yields mixed results that are tied to differing sensitivities to vertical heterogeneity. Moreover, the in situ cloud probes, often used as the benchmark for remote sensing CER retrieval assessments, themselves do not agree, with PDI DSDs yielding CER 1.3–1.6 µm larger than CAS and CEV roughly 50–60 % smaller than CAS. 
Implications for the interpretation of spectral and polarimetric CER retrievals and their agreement are discussed.
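The two quantities compared throughout the paper, cloud effective radius (CER) and effective variance (CEV), are standard area-weighted moments of the droplet size distribution (Hansen and Travis, 1974). As a minimal illustrative sketch (not taken from the paper; the gamma-shaped DSD below is hypothetical), they can be computed from binned in situ probe data as:

```python
# Sketch: CER and CEV from a binned droplet size distribution (DSD),
# using the standard Hansen & Travis (1974) definitions. The example
# DSD shape below is hypothetical, chosen only for illustration.
import numpy as np

def effective_radius_variance(r, n):
    """r: bin-center radii (µm); n: droplet concentration per bin (cm^-3).
    Returns (CER in µm, dimensionless CEV)."""
    r = np.asarray(r, dtype=float)
    n = np.asarray(n, dtype=float)
    m2 = np.sum(n * r**2)              # proportional to total droplet cross-section
    cer = np.sum(n * r**3) / m2        # area-weighted mean radius
    cev = np.sum(n * (r - cer)**2 * r**2) / (cer**2 * m2)
    return cer, cev

# Hypothetical gamma-like DSD peaking near 10 µm
r = np.linspace(2.0, 25.0, 100)
n = r**6 * np.exp(-r / 1.5)
cer, cev = effective_radius_variance(r, n)
```

For a gamma DSD n(r) ∝ r^α e^(−r/β), the continuous-limit values are CER = β(α + 3) and CEV = 1/(α + 3), so the example above yields CER near 13.5 µm and CEV near 0.11, a narrow distribution of the kind the polarimetric retrievals are sensitive to.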
Status: final response (author comments only)
EC1: 'Comment on egusphere-2024-2021', Paquita Zuidema, 02 Sep 2024
As a member of the larger research community interested in this topic, I read the manuscript, and decided to post thoughts/questions I had as I read it. I hope these can be useful.
The introduction would benefit from a stronger tie-in to the cloud-aerosol interactions motivating ORACLES; currently it is highly general. Specific motivations include smoke-cloud microphysical interactions, the time evolution in CER toward discriminating the boundary-layer semi-direct effect from the microphysical interactions, and the impact of intervening absorbing aerosol on the CER retrievals.
Overall I was surprised to not see more language about overlying smoke in the manuscript and its cases. Did I miss something? Was this not an issue?
More specific comments:
Paragraph spanning lines 56 to 69: why aren't the effects of SSA assumptions for the overlying smoke on the CER retrievals mentioned here?
In the case of ORACLES the overlying smoke layer was also humid enough to support clouds at times near the top of the aerosol layer. The RSP also provided useful retrievals of these mid-level cloud properties, as shown in Adebiyi et al 2020. This paper would be worth citing, perhaps at the end of line 172 or as an example of CER retrievals helping to determine aerosol-cloud interactions in the ORACLES regime.
Line 141: what is the lower limit of the drop size detected by CAS? Is it also 3 micron? It is also my understanding that the smallest drop-size bin of the PDI was too noisy to use, perhaps because of interference from the airplane's electrical system. Can the ORACLES PDI PI comment? For the thin, polluted clouds sampled during ORACLES, the lower size limits will be important. How did the Nd values compare?
Section 2.4: I continue to be puzzled as to why the SSA of the overlying smoke layer isn't mentioned as an additional uncertainty within the eMAS retrieval.
Top of p. 10: the eMAS issues seem like they belong in the data section.
Fig 1 caption: provide total # of flights in each category.
Line 419: in combination, for LWP, the bias is about 10% (Grosvenor et al. 2018).
Figs 7-9: What was the AOD overlying the cloud in both 20 Sept cases? Same question applies to the 14 Sept case later on.
Fig. 10: are the cloud DSDs unimodal throughout the vertical column? It might be good to examine the actual DSDs near cloud top to understand the differences better. Maybe the issue is that the 2D-S is having trouble picking up drops near the smaller end of its range, so that the DSD looks more bimodal within the CAS/2D-S combined distribution. Is there any evidence of coincidence undercounting by the CAS? It would also be good to show the vertical profiles of LWC, allowing you to add in the bulk King-derived LWC. How thermodynamically well-mixed was the cloud layer?
Table 2: why are the polarimetric optical depths so low?
p. 41: how confident are you in the CO2, CH4, H2O above-cloud optical depths/loadings? If I understand correctly these are coming from a reanalysis - is that MERRA-2? You mention doubling the concentrations; these might bring you closer to the actual values. How do the reanalysis H2O values compare to those measured on the aircraft? This error source should be genuinely quantifiable using the ORACLES H2O in situ dataset, in contrast to the statement made on line 909. This may be addressed in Pistone et al. 2024.
p. 44: more effort can and should be made to understand the differences between the two probes. How about picking a long level in-cloud leg, ideally near cloud top, and comparing an average of the full DSD? Is there evidence of bimodality? How does Nd compare? What is the flow volume of the two probes? Is the sensitivity to the smallest drop sizes the same?
Section 5: why isn’t the aerosol overlying the cloud mentioned for these cases? Could it be neglected? Did you select 2 cases with no overlying AOD? I don’t recall reading that and wonder if I missed it.
Section 6: Stronger guidance should be provided here to help potential users of the publicly-available eMAS cloud property retrievals. Which one would you recommend?
References:
Adebiyi, A. A., Zuidema, P., Chang, I., Burton, S. P., and Cairns, B., 2020: Mid-level clouds are frequent above the southeast Atlantic stratocumulus clouds. Atmos. Chem. Phys., 20, 11025-11043, doi:10.5194/acp-20-11025-2020
Grosvenor, D. P., et al., 2018: Remote sensing of droplet number concentration in warm clouds: A review of the current state of knowledge and perspectives. Rev. Geophys., 56, 409-453, doi:10.1029/2017RG000593
Pistone, K., et al., 2024: Vertical structure of a springtime smoky and humid troposphere over the Southeast Atlantic from aircraft and reanalysis. Atmos. Chem. Phys., 24, 7983-8005, doi:10.5194/acp-24-7983-2024
Citation: https://doi.org/10.5194/egusphere-2024-2021-EC1
AC3: 'Reply on EC1', Kerry Meyer, 14 Nov 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-2021/egusphere-2024-2021-AC3-supplement.pdf
RC1: 'Comment on egusphere-2024-2021', Anonymous Referee #1, 04 Sep 2024
This paper provides comprehensive intercomparisons of the cloud effective radius (CER) retrievals obtained from different instruments and various retrieval techniques (e.g., bi-spectral and glory retrievals from eMAS, polarimetric retrievals from RSP, and in situ retrievals from cloud probes), based on two case studies using airborne observations from the NASA ORACLES field campaign. The CER retrievals not only disagree among themselves instrument by instrument and method by method, but also show different discrepancy patterns case by case, which reveals the challenges of accurately "measuring" cloud microphysics. The assumptions involved in each measurement-based method (e.g., water refractive index, effective variance of the droplet size distribution, etc.) are discussed in detail to explain the retrieval discrepancies. This paper is very well written and provides invaluable insights into the remote sensing of cloud microphysics, which will guide future development of satellite CER retrievals. The paper fits well within the scope of the AMT ORACLES special issue, and I recommend publication with minor revision.
General comments:
- P11L324 - P11L330: The technique that utilizes instrument A (here RSP) to cross-calibrate instrument B (here eMAS) is very valuable to the community. I suggest [NOT MANDATORY] adding an appendix describing the technique in detail (for example, how to deal with different sensor geometries, which might involve different attitude corrections [the uncertainty might become larger than the radiometric uncertainty], instrument line shapes, etc.) to make the technique 1) citable and 2) generally applicable to cross-calibration of other instruments.
- Based on the AMT reference guidelines,
If the author's name is part of the sentence structure only the year is put in parentheses ("As we can see in the work of Smith (2009) the precipitation has increased")
E.g., change P8L242 (Platnick, 2000) to Platnick (2000); similarly P8L248 (Gupta et al., 2022b), P9L269 (Gupta et al., 2022b), etc. Additionally, please make the reference order consistent throughout the paper (e.g., when multiple references are cited, the oldest goes first).
- From a measurement perspective, accounting for the vertical heterogeneity (z direction) of clouds while constraining the horizontal heterogeneity (x-y direction) is quite challenging (which might explain the different results between the sawtooth and ramp cases). I wonder which approach the authors think is better? Would a square spiral into the clouds help constrain the horizontal heterogeneity of clouds?
Minor comments:
P5L134: What does “effective pixel size” mean? Field of view at different angles?
P5L154: change to “have heritage with the cloud products … from MODIS science team” or something similar
P8L226: “sun/satellite” to “sun/sensor”
P8L227: Since the above-cloud gaseous absorption corrections are discussed at P41L886, it would be good to clarify here the definition of "top-of-cloud reflectance" (i.e., whether gases and/or aerosols are taken into account).
P11L308: change “with some degree of confidence” to “(with some degree of confidence)”
P15L385: In Figure 4, rightmost panel for CER (3.7 µm), a very different pattern (a bright portion with relatively low retrieval uncertainty) shows up on the upper side; what could be the causes?
P21L475: It looks like the CASshifted was moving in the right direction to agree with PDI. Is it possible to use a different constraint to shift the size bins so that CAS matches PDI, and thereby claim that the in situ probes can achieve agreement? Or would the constraint become nonphysical?
P33L751: "misleadingly intensifying the appearance of scene heterogeneity": I think this statement is valid for the visible. The appearance of heterogeneity should be robust when it comes from the NIR or SWIR channels. In turn, the cloud optical thickness (visible) will not contain a large 3D bias, but CER (NIR) might still suffer from 3D effects.
P36L807: "10µm" to "10 µm" (adding a space in between)
Citation: https://doi.org/10.5194/egusphere-2024-2021-RC1
AC1: 'Reply on RC1', Kerry Meyer, 14 Nov 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-2021/egusphere-2024-2021-AC1-supplement.pdf
RC2: 'Comment on egusphere-2024-2021', Anonymous Referee #2, 06 Sep 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-2021/egusphere-2024-2021-RC2-supplement.pdf
AC2: 'Reply on RC2', Kerry Meyer, 14 Nov 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-2021/egusphere-2024-2021-AC2-supplement.pdf