the Creative Commons Attribution 4.0 License.
EarthCARE Cloud Profiling Radar Observations of the Vertical Structure of Marine Stratocumulus Clouds
Abstract. Launched in May 2024, the EarthCARE Cloud Profiling Radar (EC-CPR) provides enhanced sensitivity, finer vertical and horizontal resolution, and greatly reduced surface clutter contamination compared to its predecessor, the CloudSat's CPR (CS-CPR). These improvements enable more accurate detection and characterization of the vertical structure of marine low-level clouds. This study presents the first year of EC-CPR observations of stratocumulus (Sc) clouds over the Southeast Pacific and Southeast Atlantic Oceans.
The analysis of EC-CPR clear-sky profiles and comparisons with airborne radar data confirm that surface clutter is effectively suppressed above 0.5 km. Comparisons with CS-CPR data from 2007–2008 show that EC-CPR detects nearly double the Sc amount relative to CS-CPR in the regions of study. When a columnar maximum reflectivity (ZMAX) threshold of −15 dBZ is used to flag raining profiles, CS-CPR is found to underestimate rainfall occurrence by up to ~ 20 % relative to EC-CPR.
Using a steady-state one-dimensional drizzle model, the impact of the point target response (PTR) on EC-CPR reflectivity profiles in Sc clouds is examined. PTR causes vertical stretching of radar-detected cloud boundaries, resulting in an overestimation of cloud thickness by approximately 0.4–0.5 km in drizzling clouds. Additionally, PTR induces parabolic shaping of reflectivity profiles regardless of drizzle presence, complicating the distinction between drizzle-free and drizzle-containing clouds. These findings underscore the need for cautious interpretation of radar reflectivity profiles and suggest the incorporation of additional constraints, such as Doppler velocity and path-integrated attenuation (PIA) to improve future drizzle detection strategies.
Status: open (until 02 Jan 2026)
- RC1: 'Comment on egusphere-2025-5421', Matthew Lebsock, 02 Dec 2025
- RC2: 'Comment on egusphere-2025-5421', Roger Marchand, 19 Dec 2025
Review of egusphere-2025-5421
Title: EarthCARE Cloud Profiling Radar Observations of the Vertical Structure of Marine Stratocumulus Clouds
Authors: Zhuocan Xu, Pavlos Kollias, Susmitha Sasikumar, Alessandro Battaglia, and Bernat Puigdomènech Treserras
Overview:
In this manuscript, Xu and coauthors compare and contrast the first year of EarthCARE radar observations with those from CloudSat for stratocumulus over the Southeast Atlantic (SEA) and Southeast Pacific (SEP) Ocean. It is a nice paper, documenting and explaining key differences between the observing systems.
Recommendation: Accept minor revisions.
General Comments:
1) Use of 1D model
I am unclear as regards how you generated the modeled points in Figures 6 and 7. Please describe this activity in sufficient detail that another researcher could reproduce your results. A variety of specific questions follow (lines 241-251).
2) Data availability
Maybe I’m just missing it, but where can I get copies of the EC-CPR data (and other “inputs” to your analysis), as well as your derived data? As regards the latter, I would like to have access to the PTR function, at a minimum.
Specific Comments:
Line 10. Change “finer vertical and horizontal resolution” to “finer horizontal resolution and finer vertical sampling”. (Afterall, the vertical oversampling and Point-Target-Response are major aspects of the paper).
Line 16. I think the sentence, “CS-CPR is found to underestimate rainfall occurrence by up to ~20%” is misleading as this value applies only to the case where Zmax is located between 500 and 750 m above the surface. (See related comments and suggestions for lines 322). I suggest changing to 10% (so it encompasses all drizzle) or make clear the height-restriction.
Line 62. CloudSat minimum detectable signal varied significantly over the course of the mission being close to -30 dBZ at the start and ending near -25 dBZ. A cloud mask threshold of 20 is roughly equivalent to -28 dBZ in the first couple years of the mission. I am not sure this is documented anywhere in the peer-review literature, though I will do so in my next publication on this topic. As-is, I invite you to use a sentence along the lines of “ … whereas CloudSat’s MDS varied over the course of the mission being close to -30 dBZ in 2007 and ending near -25 dBZ in 2020 (personal communication, Roger Marchand).
Line 63. Of course, 2.3 is just the along track factor. Perhaps clarify and add something about the field-of-view area (which must be what ~ 5 times smaller)?
Line 101. “Not reflectivity” is awkward. Perhaps “no measured reflectivities”.
Line 116. I am a little surprised you were able to use a 1 x 1 degree grid with the CloudSat data. The fixed, repeating orbit yields a ground-track spacing that is more than 1 degree between tracks at the equator. If you look at the number of CloudSat samples in each 1x1 degree bin, is the coverage anywhere close to uniform? In general, you might comment on the average number of orbits (transects) contributing to the data in each 1x1 degree grid to provide some sense for the sampling uncertainty in the cloud fractions shown here.
Line 120. “Dominance”?? Perhaps “… cloud fraction is lower, consistent with the well-known tendency of the extensive Sc cloud decks in this region gradually breaking …”
Line 120. What about near the coast? Presumably the lower CTH is a problem for both sensors (but especially CloudSat) as one approaches the coast.
Line 122. Perhaps show (or put in a supplement) the mean “echo / cloud top height (CTH)” fields, rather than “not shown”.
Line 147. Is the receiver linear over the full range (+35 to -30 dBZ)?
Line 161. What about drizzle, adiabaticity, sub-cloud RH? I think you should provide all the inputs needed for the 1D model (so that in principle another researcher could reproduce your results).
Line 166. Enhanced relative to what? I was confused when I initially read this sentence because I assumed enhanced was going to mean with PTR relative to without PTR. Perhaps rephrase, to "In general, larger LWP leads to larger cloud reflectivity (for the same Nd), and likewise smaller Nd leads to larger cloud reflectivity (for the same LWP). For Nd this is because ...”.
Line 180. This is true. But it might also be worth noting that when drizzle is present Zmax is located below cloud base, and one should NOT interpret the altitude where Zmax occurs (Hmax) as the location of cloud base. Nor is Hmax necessarily the location where precipitation or total water content is largest.
Line 180. I am not sure I think the algorithm would be "straight forward" = “uncomplicated and easy to do or understand”. In particular, if one doesn't know the cloud thickness, adiabatic factor, RH. But if this is your opinion, so be it.
Line 196. Perhaps change “ground-based observations” to “radar observations with high vertical resolution (short pulse)”
Line 221. I presume the ER-2 was flying near 20 km, such that the sensitivity at the cloud level will be around -24 dBZ, yes? While it would be "OK" to just give the altitude of the aircraft, I think better to give the sensitivity for the clouds shown (rather than for an arbitrary 10 km distance).
Line 221. How close (in space/time) are the ground tracks shown here? Clearly, they can't be too far off.
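(Aside on the -24 dBZ estimate above: the minimum detectable signal, expressed in dBZ, degrades with range as 20·log10(r/r0), since received power scales as Z/r². A minimal sketch of that scaling follows; the -29 dBZ reference value at 10 km is an illustrative assumption, not a number taken from the manuscript.)

```python
import math

def mds_at_range(mds_ref_dbz: float, r_ref_km: float, r_km: float) -> float:
    """MDS (dBZ) at range r_km, given the MDS quoted at reference range r_ref_km.

    Follows the standard radar-equation range correction: +20*log10(r/r0).
    """
    return mds_ref_dbz + 20.0 * math.log10(r_km / r_ref_km)

# Illustrative: an MDS of -29 dBZ quoted at 10 km worsens to about -24 dBZ
# at ~18 km range (ER-2 near 20 km altitude, cloud tops near 1-2 km).
print(round(mds_at_range(-29.0, 10.0, 18.0), 1))  # -> -23.9
```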
Figure 5. Caption is incomplete.
Lines 241-251.
A) What is "all"? I presume this relates to the ranges of Nd and LWP specified near lines 130 & 160.
B) Similar to comment for line 161, What about drizzle, adiabaticity, sub-cloud RH? The earlier description of the model also suggests five parameterizations for autoconversion or accretion. Is the model being “fit” to observed profiles or found via some “nearest profile in the library of curves”? If yes, which parameters are fit and which are fixed? I’m guessing that LWP and Nd are nearest values in the table (without interpolation). Broadly, please provide a more detailed (and nominally mathematical) description of what has been done here (see general comment #1).
C) Is the layer thickness being used as an “input” in the retrieval? If yes, what do you do when the layer base reaches 500 m (and you can no longer determine the layer thickness)? More broadly, how is ground clutter considered?
D) For the example shown in Figure 5, how well does this “retrieval” of the layer thickness compare with ER2 CRS?
Line 255. I think you meant to write “… can capture ONLY … "? Perhaps the larger point that needs to be expressed here is that a 1D model can't fully represent the range of observed reflectivity profiles?
Figure 7(b). This figure does not get much discussion and I am not clear why you have opted to include such. (Perhaps explain to readers why this is of value).
Line 280. I know this is just a pet peeve on my part and the term CFAD is in common usage, but I prefer (and recommend) using “reflectivity-height histogram.” Are the data or the plot actually “contoured” in any way?
Line 285. The “cloud fraction is … 40% or 20%”. Is this the "volume fraction" in CFAD or the fraction of columns with a detection (which is of course NOT calculated from the CFADs). Please clarify.
Figure 8. What is the bin spacing used here? (Perhaps put in the caption).
Figure 9. It seems odd to give the cloud fraction (which I take to mean fraction of columns with a radar detection for any Zmax) in percent, but the drizzling fraction NOT in percent. Suggest give drizzle fraction in percent.
Line 318. While resolution/sampling is a factor, the peak in the CS-CPR near -30 dBZ is likely due more to noise. That is, noise sometimes increases reflectivity: resolution volumes with a signal just below the nominal noise floor sometimes experience an increase in total power and are therefore more likely to be identified as containing clouds. The uncertainty in the reflectivity factor for a detection at the noise floor is essentially 100%. As the EC-CPR data show, there IS a large volume with clouds just below the ability of CloudSat to detect, AND so a large volume where the noise can lead to detections with noise-enhanced reflectivity factors (leading to a bump).
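(The noise-enhancement mechanism described in this comment can be illustrated with a toy Monte Carlo. The threshold, signal level, and noise model below are arbitrary illustrative choices, not CloudSat's actual characteristics; the point is only that conditioning on detection selects the positive noise fluctuations, biasing the detected powers high.)

```python
import random

random.seed(0)
threshold = 1.0     # detection threshold (arbitrary linear power units)
true_signal = 0.8   # a signal just below the detection threshold
n = 100_000

detected = []
for _ in range(n):
    # Toy noise fluctuation: only realizations where noise pushes the
    # total power above the threshold are "detected".
    noise = random.expovariate(1.0) * 0.3
    total = true_signal + noise
    if total > threshold:
        detected.append(total)

# The surviving detections are positively biased relative to the true
# signal, producing a bump of noise-enhanced reflectivities near the MDS.
mean_detected = sum(detected) / len(detected)
print(mean_detected > true_signal)  # prints True
```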
Line 322. While it is true that 20% are missing in this category, this category represents less than 1/3 of the drizzle. I think it would be better to note both 20% for the lowest altitude category and a value for the total loss. That is, perhaps add “… (3% to 2.4%); and overall CS-CPR misses ~10.5% of all drizzling Sc relative to EC-CPR (11.4% to 10.2%) ." (The later numbers come from summing the three categories).
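(The suggested numbers can be checked directly from the fractions quoted in this comment:)

```python
# Relative fraction of drizzling Sc missed by CS-CPR vs EC-CPR,
# using the percentages quoted in the comment above.
ec_lowest, cs_lowest = 3.0, 2.4   # lowest-altitude category (%)
ec_total, cs_total = 11.4, 10.2   # sum over the three categories (%)

miss_lowest = (ec_lowest - cs_lowest) / ec_lowest * 100  # relative miss, lowest category
miss_total = (ec_total - cs_total) / ec_total * 100      # relative miss, all drizzle

print(round(miss_lowest, 1), round(miss_total, 1))  # -> 20.0 10.5
```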
Line 325. Perhaps “similar” rather than “same”.
Line 356. This is an excellent point, and perhaps add "… and the Zmax does not necessarily maximize where the precipitation is most abundant."
Citation: https://doi.org/10.5194/egusphere-2025-5421-RC2
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 252 | 84 | 28 | 364 | 24 | 21 |
General Comments:
This paper presents some early performance characteristics of the EarthCARE cloud profiling radar with regard to detection of hydrometeors in marine stratocumulus clouds. EarthCARE performance is shown to be a notable advance over CloudSat with respect to detection sensitivity and surface clutter suppression. Marginal improvements in the detection of precipitation (drizzle) are shown as well. The paper is timely: EarthCARE is new and a good specific reference for StCu is warranted. The presentation is generally of a high quality and the methods are appropriate. I only have a few minor comments listed below to be addressed.
Specific comments:
Line 240: 2.5 km should be 1.7 km. See Tanelli et al. 2008, Table 1.
Line 128: add ‘the’ before ‘model’.
Subsection numbering is messed up. There are two 2.1 and two 2.3 sections but no 2.2!
Figure 4: I think you could probably make this figure more compelling. I think it would help to add both an EarthCARE and CloudSat example of a thin non-precipitating cloud with cloud top at or below 1 km. There are lots of examples where CloudSat has only one or two bins of reflectivity where EarthCARE might see significantly more detail. I think you envision your panel A as showing a marginal cloud, but this is actually a fairly thick StCu.
Line 220: note here that this field campaign included coordinated underflights of EarthCARE.
Lines 240-247: This paragraph describes model results. Does it belong here? I would put this back in your section 2.3 (the one that discusses the model results).
Lines 248 – 276. You might want to include a sentence or two before this discussion to describe why you are showing these results. I think you are trying to identify a multi-variable relationship with precipitation that goes beyond a simple reflectivity threshold. I also think your results show that this is hard to do and there is likely inherent uncertainty in cloud/precipitation identification. Maybe add a little discussion of that fact.
Line 280: What CloudSat years were used? The MDS changed by about 6 dB over the course of the mission, which would significantly influence the PDFs in Figures 8 and 9.
Figure 9: I’m confused about two aspects of this figure. The CS PDFs don’t show detections smaller than about -26 dBZ (related to the question above). There is no reason that EarthCARE should detect more -15 dBZ clouds than CloudSat at altitudes above 750 m, but panels b and c show this. Why? Is it just that the time period sampled is different?
Line 320: So CS misses 20% of the EC precip detections at this height bin. Can you also add for reference what fraction of EC radar shots contain precip?
Section 3.2: You should add a bit more analysis to this section. First, it would be useful to include the total fraction of radar shots with a StCu hydrometeor detection in each of the two regions for both EarthCARE and CloudSat. Second, I would add a plot that shows the vertical profile of the hydrometeor detection fraction from each sensor.
Referencing in the intro is a little thin. Here are some (not a comprehensive list) to add:
Tanelli et al., "CloudSat's Cloud Profiling Radar After Two Years in Orbit: Performance, Calibration, and Processing," in IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3560-3573, Nov. 2008, doi: 10.1109/TGRS.2008.2002030
Wood, R., T. L. Kubar, and D. L. Hartmann, 2009: Understanding the Importance of Microphysics and Macrophysics for Warm Rain in Marine Low Clouds. Part II: Heuristic Models of Rain Formation. J. Atmos. Sci., 66, 2973–2990, https://doi.org/10.1175/2009JAS3072.1.
Wood, R., D. Leon, M. Lebsock, J. Snider, and A. D. Clarke (2012), Precipitation driving of droplet concentration variability in marine low clouds, J. Geophys. Res., 117, D19210, doi:10.1029/2012JD018305.
L'Ecuyer, T. S., W. Berg, J. Haynes, M. Lebsock, and T. Takemura (2009), Global observations of aerosol impacts on precipitation occurrence in warm maritime clouds, J. Geophys. Res., 114, D09211, doi:10.1029/2008JD011273.
Mülmenstädt, J., Salzmann, M., Kay, J.E. et al. An underestimated negative cloud feedback from cloud lifetime changes. Nat. Clim. Chang. 11, 508–513 (2021). https://doi.org/10.1038/s41558-021-01038-1