This work is distributed under the Creative Commons Attribution 4.0 License.
Discriminating between "Drizzle or rain" and sea salt aerosols in Cloudnet for measurements over the Barbados Cloud Observatory
Abstract. The highly sensitive Ka-band cloud radar at the Barbados Cloud Observatory (BCO) frequently reveals radar reflectivities below -50 dBZ within updrafts and below the cloud base of cumulus clouds. These so-called haze echoes are signals from hygroscopically grown sea salt aerosols. Within the Cloudnet target classification scheme, haze echoes are generally misclassified as precipitation ("Drizzle or rain"). We present a technique to discriminate between "Drizzle or rain" and sea salt aerosols in Cloudnet that is applicable to marine Cloudnet sites. The method is based on deriving heuristic probability functions utilizing a combination of cloud radar reflectivity factor, radar mean Doppler velocity, and ceilometer attenuated backscatter coefficient. The method is crucial for investigating the occurrence of precipitation and significantly improves the Cloudnet target classification scheme for the measurements over the BCO. The results are validated against the amount of precipitation detected by the Virga-Sniffer tool. We analyze data for the measurements in the vicinity of the BCO covering two years (July 2021–July 2023) as well as during the ElUcidating the RolE of Cloud–Circulation Coupling in ClimAte (EUREC4A) field experiment that took place in January–February 2020. A first-ever statistical analysis of the Cloudnet target classification product including the new "haze echo" target over two years at the BCO is presented. In the atmospheric column above the BCO, "Drizzle or rain" is on average more frequent during the dry season than during the wet season, due to the higher occurrence of warm clouds contributing to the amount of precipitation. Haze echoes are identified about four times more often during the dry season than during the wet season. The frequency of occurrence of "Drizzle or rain" in Cloudnet is overestimated by up to 16 % due to misclassified haze echoes. According to the Cloudnet statistics and the results obtained from the Virga-Sniffer tool, 48 % of the warm clouds detected in the dry and wet seasons precipitate. The proportion of precipitation evaporating fully before reaching the ground (virga) is higher during the dry season. During EUREC4A, precipitation from warm clouds was found to reach the ground more frequently over the RV Meteor than over the BCO.
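As a concrete illustration of the classification idea summarized in the abstract: the method derives a heuristic per-pixel haze probability from cloud radar reflectivity (Ze), mean Doppler velocity (Vm), and ceilometer attenuated backscatter (beta), and thresholds it at 60 %. The sketch below is a minimal, hypothetical rendering of such a scheme, not the authors' implementation; the Gaussian membership functions, all centers and widths, and the geometric-mean combination are illustrative assumptions.

```python
import numpy as np

def gaussian_membership(x, mu, sigma):
    """Heuristic membership in (0, 1], peaking at mu (illustrative values only)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def haze_probability(ze_dbz, vm_ms, beta_log, threshold=0.6):
    """Combine per-pixel memberships into a heuristic haze probability.

    The centers and widths below are assumptions standing in for the paper's
    tuned parameters: very weak echoes (~ -60 dBZ), near-zero Doppler
    velocity, and enhanced ceilometer backscatter (log10 of beta in
    m-1 sr-1) typical of hygroscopically grown sea salt aerosol.
    """
    p_ze = gaussian_membership(ze_dbz, mu=-60.0, sigma=10.0)
    p_vm = gaussian_membership(vm_ms, mu=0.0, sigma=0.5)
    p_beta = gaussian_membership(beta_log, mu=-5.5, sigma=0.5)
    p_haze = (p_ze * p_vm * p_beta) ** (1.0 / 3.0)  # geometric mean (one possible combination)
    return p_haze, p_haze >= threshold

# Example pixel: weak echo, barely falling, strong aerosol backscatter
p, is_haze = haze_probability(ze_dbz=-58.0, vm_ms=-0.1, beta_log=-5.4)
print(f"p(haze) = {p:.2f}, flagged as haze: {is_haze}")
```

A probability-based combination like this degrades gracefully when one observable is ambiguous, which is presumably why such a scheme is preferred over a single hard Ze threshold.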
Status: final response (author comments only)
RC1: 'Comment on egusphere-2024-894', Anonymous Referee #1, 02 Jul 2024
The authors present a new method to detect hygroscopically grown sea salt aerosol below trade wind cumuli bases based on the Cloudnet suite of remote sensing instruments. The method improves the Cloudnet target classification scheme, which currently mislabels grown aerosols (haze) as "drizzle or rain". After summarizing the state of the art and presenting the instrumentation at the Barbados Cloud Observatory (BCO) as well as the Cloudnet algorithm, the authors introduce their new classification methodology as well as a cloud type classification algorithm.
Applying their method to 2 years of BCO measurements and the EUREC4A period, they perform a statistical analysis of Cloudnet target classification occurrence, analyze controlling environmental factors, and evaluate the performance of their new classification class with labels obtained from the Virga-Sniffer method available in Kalesse-Los et al. (2023). The authors also present statistics of precipitation and virga at BCO, and discuss limitations of their new method.
The paper is well written and I acknowledge the early career status of the first author. I am making some general and more specific comments below to help clarify the main message of the paper and strengthen the presented argumentation.
General comments:
- GC1: The introduction summarizes that radar reflectivity thresholds are often used to exclude haze echoes from drizzle/rain occurrence analyses (L 71-73). The manuscript in its current state, however, does not clarify in what way the new haze category in Cloudnet improves or changes occurrence statistics compared to the Ze-threshold method (e.g., Klingebiel et al., 2019). I would propose to add a comparison to the analysis section (also see comment below).
- GC2: The authors need to clarify where to find the data sets that they used for their analysis. Many datasets, especially those obtained during the EUREC4A period, are publicly available with DOIs; were these data sets used?
- GC3: Sec 3.1, L247-259: I am missing the reasons for the choice of µ, sigma, and beta, and a justification for how they were optimized and set. The authors should clarify how sensitive the analyses are to these settings, including the probability threshold of 60 % (L261); a toy sensitivity sweep is sketched after this list. How would users of the method need to adjust these settings for different maritime Cloudnet sites (or is it "plug and play")?
- GC4: I find the argumentation line of the analysis at times hard to follow. To make the new method more convincing, I would propose to adapt the structure as follows:
- 4 i) apply the method to BCO statistics and evaluate the new approach by comparing to the Virga-Sniffer and the traditional -50 dBZ threshold in order to highlight the benefits of the new classification scheme for haze detection; include an analysis of limitations using spectra/skewness (also see Specific comments below) and sensitivity to parameters (see comment above)
- 4 ii) analyze driving factors of haze occurrences in water vapor or subsidence space (see comment below)
- (optional:) 4 iii) make use of Cloudnet and Virga-Sniffer to analyze virga and precipitation statistics at BCO given the improved detection scheme excluding mislabeled haze
- GC5: the main message of the paper should be clarified throughout the manuscript. Is the main scope of the manuscript to introduce a new Cloudnet classification scheme? Or, rather, to analyze rain and virga characteristics at BCO given an optimized detection method? Abstract and introduction rather focus on the novel Cloudnet method, while the main scope of the analysis section seems to focus on BCO statistics.
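Regarding GC3, the requested sensitivity analysis could start as a simple threshold sweep. The sketch below reuses the hypothetical haze_probability function from the sketch after the abstract and feeds it synthetic "haze-like" and "drizzle-like" pixel populations; both the populations and the function are assumptions, not BCO data or the authors' settings.

```python
import numpy as np

# haze_probability as defined in the sketch after the abstract (hypothetical)
rng = np.random.default_rng(0)

# Synthetic sub-cloud pixels: a weak 'haze-like' mode and a stronger 'drizzle-like' mode
ze = np.concatenate([rng.normal(-60, 5, 5000), rng.normal(-35, 8, 5000)])
vm = np.concatenate([rng.normal(0.0, 0.3, 5000), rng.normal(-1.0, 0.5, 5000)])
beta = np.concatenate([rng.normal(-5.5, 0.3, 5000), rng.normal(-6.5, 0.4, 5000)])

for threshold in (0.4, 0.5, 0.6, 0.7, 0.8):
    _, flagged = haze_probability(ze, vm, beta, threshold=threshold)
    print(f"threshold {threshold:.1f}: {flagged.mean():6.1%} of pixels flagged as haze")
```

If the flagged fraction changes little between, say, 0.5 and 0.7, the 60 % choice is benign; a steep change would support the reviewer's request for justification.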
Specific comments:
- L 21-26: The importance of evaporation and moistening processes in the sub-cloud layer for cloud and precipitation evolution should be highlighted here.
- L24: A reference should be added.
- L177: To my knowledge, operation of the CORAL radar and ceilometer continued at BCO after the EUREC4A campaign (http://bcoweb.mpimet.mpg.de/systems/data_availability/DeviceAvailability.html, last access: 2 July 2024)
- L178: the authors should clarify why this data is not usable and why timestamps cannot be corrected in post-processing.
- Sec 2.2.1 and Sec. 2.2.2; L 569: it remains unclear to me throughout the manuscript how MWR and MRR data impact the classification algorithm. Are they mandatory for the new classification class? The HATPRO in operation at the time of analysis is the BCOHAT instrument as specified in Schnitt et al. (2024), ESSD (doi.org/10.5194/essd-16-681-2024) (reference missing)
- Fig 3: I would suggest adding boxes in (a) which illustrate the zoomed areas in (b) and (c), and maybe re-configuring the plot such that (a) is largest on the left side, and (b) and (c) are smaller and connected to the boxes in (a)
- Fig 4: in order to highlight the added value of the new classification class compared to the conventional Ze threshold method (L70), I would propose to add a panel on top to illustrate the measured radar reflectivity.
- L205: arguments for why m and c were chosen this way should be added here. How sensitive are the clutter mask and the resulting analysis and evaluation to these values? I suspect that the presented evaluation of the Cloudnet class with the Virga-Sniffer strongly depends on the values chosen here (a placeholder sketch of such a mask follows this list).
- L205: do the authors refer to the sensitivity limit of the CORAL radar? If so, this limit should scale with range, and should be negative. If not, a clarification is needed here.
- L232: a sentence should be added on how the insect detection scheme works and why it would be suitable for also detecting haze; this information should also be added to Sec 2.3. Why not include the Virga-Sniffer method in Cloudnet instead, or in addition, as it uses similar instruments? The advantages of the chosen method compared to the Virga-Sniffer should be highlighted.
- L290: doubles L261.
- L 311: more explanation is needed for why it would be important and interesting to split the analysis into the two cloud classes; this should already be stated in the introduction as well.
- Sec 4.1: Rather than splitting the analysis into dry and wet season statistics for each year, an occurrence analysis could be performed in subsidence or water vapor space for both years, for example to prevent wet intrusions in the dry season from skewing the statistics. The EUREC4A period could be used to analyze driving factors in more detail, such as cloud organization type, cloud type, wind direction, and wind speed, and to include the impact of Saharan dust events on haze occurrence (which is mentioned in L403 but not shown).
- Sec 4.3: The authors should clarify why they are comparing BCO and the Meteor observations; I am confused - did the authors also run Cloudnet based on the Meteor? If so, additional input is needed in Sec 2 and the introduction. Maybe the authors rather use the comparison to optimize the application of the Virga-Sniffer to BCO measurements in which case the text needs to be clarified to underline this.
- L425 and Fig 9: The text should comment on the large occurrence of 'Unclassified' aboard the Meteor compared to the BCO, and should summarize why the difference between object- and profile-based statistics is particularly pronounced for the trade wind cumulus class in panel (b) compared to the warm clouds class in panel (a) (which is hinted at in L434 and extensively analysed in Appendix C, but should be summarized here)
- Sec 4.3.2 As I understand the analysis presented here, the Virga-Sniffer is applied to BCO measurements and occurrence statistics of virga, precipitation and clouds are analysed. I am not sure how this Section relates to the title of the manuscript, as the Cloudnet haze method is not included in this Section (also see GC 4 and 5)
- L476: it should be clarified if Virga-Sniffer results are shown, or Cloudnet classification results; also see comment above
- L502: Spectra and higher moments are available at MPI for the analysed period and should be used in the analysis to strengthen the proposed method, or, at least, to quantify the limitations more thoroughly. Could the classification scheme be adapted to include skewness as an additional proxy for detecting haze?
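To make the L205 comments on m, c, and the sensitivity limit concrete: the sketch below prototypes a linear Ze-Vm clutter mask and a range-scaled radar noise floor. The slope and intercept are placeholders (their justification is exactly what the comment requests), and the reference sensitivity of -65 dBZ at 1 km is an assumption, not the CORAL radar's specification; only the 20 log10(r/r0) range dependence is the standard scaling of a radar's minimum detectable reflectivity.

```python
import numpy as np

def clutter_mask(ze_dbz, vm_ms, m=-20.0, c=-55.0):
    """Flag pixels below the line Ze = m * Vm + c.

    m (dBZ per m s-1) and c (dBZ) are placeholders, not the manuscript's
    tuned settings; their justification is what the comment asks for.
    """
    return ze_dbz < m * vm_ms + c

def sensitivity_limit(range_m, ze_min_ref=-65.0, ref_range_m=1000.0):
    """Radar noise floor vs. range: Ze_min(r) = Ze_min(r0) + 20 log10(r / r0).

    The reference value of -65 dBZ at 1 km is an assumption, not the CORAL
    radar's specified sensitivity.
    """
    return ze_min_ref + 20.0 * np.log10(np.asarray(range_m) / ref_range_m)

print(clutter_mask(ze_dbz=-62.0, vm_ms=0.2))       # True: below the line
print(sensitivity_limit([250.0, 1000.0, 4000.0]))  # noise floor rises with range
```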
Technical Corrections:
- Fig 2: All colors seem to be related to 24 h data coverage; the colorbar should be adjusted to enhance the Figure's message.
- Fig 8: Legend should be adjusted to distinguish solid and dashed lines without reading the caption.
- Fig 9 caption: last sentence should be moved to main body text.
- Figs 7-10: descriptions of the colors shown in the legends need to be added to the captions.
Citation: https://doi.org/10.5194/egusphere-2024-894-RC1
RC2: 'Comment on egusphere-2024-894', Anonymous Referee #3, 09 Aug 2024
REVIEW OF THE MANUSCRIPT: "Discriminating between "Drizzle or rain" and sea salt aerosols in Cloudnet for measurements over the Barbados Cloud Observatory"
authors: Johanna Roschke, Jonas Witthuhn, Marcus Klingebiel, Moritz Haarig, Andreas Foth, Anton Kötsche, and Heike Kalesse-Los
DOI: https://doi.org/10.5194/egusphere-2024-894
GENERAL COMMENT. The paper develops a new method to detect haze echoes in Cloudnet, and it is a potentially powerful tool to be deployed at all Cloudnet stations over the ocean to reduce biases and errors in virga and precipitation detection. Correctly quantifying virga and precipitation over the ocean is a relevant scientific question and fits, in my opinion, within the scope of AMT. Moreover, I can envision useful applications in model evaluation. The algorithm is based on previous research methods, and the presented approach is quite solid, verified on a long dataset from the Barbados Cloud Observatory. I generally like the approach, the language is fluent, and the formulas are well written, but I recommend publication after major revisions: the methods could be more clearly outlined to better guide the reader through the technicalities of the approach, some parameter choices need a more solid justification, and some results should be stated more prominently.
My major concerns are the following:
1) The paper aims to introduce a new method, but besides that, it provides a lot of detailed analysis of the Virga-Sniffer method, which is the subject of a former publication by Kalesse-Los et al. Especially in the appendix, many sections are devoted to insights on the Virga-Sniffer, which is not the core of this publication. In this respect I see two possibilities: either include the new method as part of the Virga-Sniffer method (a sort of part 2 of the former paper) or remove most of the Virga-Sniffer-related analysis. I am saying this because it is confusing to follow the argumentation among the different methods used, and in the end, it is not clear which configuration is best to use.
2) I think that you need to explain and justify the values you choose for the radar reflectivity, mean Doppler velocity, and ceilometer backscatter haze distributions that you use to calculate your probabilities; I could not find this. Moreover, I don't understand why, in the example case study of Figure 5, the method identifies haze echoes only below shallow cumulus clouds. In my understanding, the identification should be independent of the cloud presence, so I would love to see more case studies, as well as details of the case shown. See the comment on the plot for some suggested analysis. I understand that in part of the paper you then decide to focus on the detection of haze frequency for the different cloud types, but in general, why restrict the analysis to profiles with a cloud base? And if this is not what you are doing, please address these doubts in the descriptions, because it is currently not clear what you are doing.
3) The paper is full of very detailed discussions where the reader can easily get lost. My suggestion is to go over the comments and think if all the details are really needed and possibly reduce some of them for the sake of readability and understanding. I know that the authors who created the algorithms think that details are necessary, but for a reader who is not familiar with the algorithm, they are totally overwhelming.
These major concerns are detailed in many comments across the publication, which are also listed below. I apologize if some questions are recurrent, but I went through the manuscript twice and noticed that I asked the same questions at different points. However, I decided not to remove them, to show the authors where the doubts came from. Besides these points, there are some minor concerns, also indicated in the comments, that deal with missing sentences and wrong deductions or statements; these need to be addressed. I would also suggest reconsidering the title after having tackled major comment number 1.
Specific comments:
1) line 30: what is the minimum sensitivity of the radar at BCO? Can you refer to a CFAD (Ze vs. height) plot for the BCO site from the literature, or add one? That would be nice.
2) line 57 (highlighted sentence): I would formulate as follows: The size at which cloud droplets transition to precipitation varies in the literature
3) line 54: Is this drizzle definition one you created, or is it from the literature? Please specify.
4) line 76: Maybe here you can also add that Cloudnet is used at multiple sites around the world; in particular, the target classification allows comparing cloud properties statistically and homogeneously across different sites. This would give value to the Cloudnet tool.
5) line 112: Is it possible to add a CFAD of Ze for the full statistics of the data you use, instead of only mentioning the variation of the sensitivity with height? See also comment 1.
6) line 131: which retrieval? Maybe cite what is used at BCO to retrieve LWP and IWV.
7) line 143: I tend to disagree. A Cloudnet station, in my experience, should include the 3 instruments needed for cloud profiling, i.e. cloud radar, aerosol lidar, and microwave radiometer, https://www.actris.eu/facilities/national-facilities/observational-platforms . Moreover, in this document, also a disdrometer is included, for a site to be a CCRES (Actris center for cloud remote sensing) https://www.actris.eu/sites/default/files/CCRES/CCRES%20Requirements%2010112022.docx.pdf
8) line 145: I would not define the MWR as an optional instrument, since it also provides IWV and LWP with lower uncertainty compared to the single-channel retrieval, which has to deal with the unknown impact of water vapor on the LWP estimation. Moreover, it has scanning options that allow monitoring of temperature and humidity fields around the site, which can become more and more relevant for model evaluation in the future.
9) line 165: What is the pixel size of Cloudnet? Maybe it would be good to mention the range resolution and the time resolution, which are also those of your output.
10) line 175: I would put the reference at the beginning of the paragraph: "In the Cloudnet target classification (Hogan and O'Connor, 2004), ...".
11) line 188: why not use the same algorithm from Tuononen et al. (2019) that you mentioned before? Is it a problem of resolution? How does it compare with the other ones?
12) line 190: what do you mean by "by more than the set of thresholds"? Can't you just say more generally "In multi-layer cloud situations, CBH is assigned with a more complex processing (see Paper, appendix)"? Also, do we need the details of the Virga-Sniffer's multilayer cloud-base detection in this paper? I would just cite the Virga-Sniffer paper.
13) line 201: why not a surface disdrometer or rain gauge? Isn't one available at BCO? At what height is the lowest valid MRR range gate?
14) line 223: Is this corrected for air motion? It might be tricky to use the mean Doppler velocity as a proxy for hydrometeor fall speed.
15) formula 1: from my understanding, this is the equation of the line that defines the clutter mask. If so, please state it clearly. I would find it easier to follow if you introduced the different masks that appear in the plot (Vm mask and clutter mask) and then defined how you identify the virga and the haze echoes given such masks. The virga mask appears at the beginning and then not anymore; this is a bit confusing to follow, also because the goal, which I think is to distinguish haze from virga, is not clearly stated. I hope these questions can help improve the clarity.
16) line 205: how do you determine the values of m and c? This is explained in the Virga-Sniffer paper (Eq. 1 in Sec. 3.4 of that paper), even though I did not find in that paragraph how you came up with the values of m and c. However, why repeat it entirely again here? Can't you give a shorter and simpler summary, citing the published paper for details? I find this Virga-Sniffer description very long and distracting from the topic of the paper, which is the other algorithm, for sea salt/drizzle discrimination.
17) line 205: At which height? Considering all heights? This is not clear.
18) line 207: Is filtering out sea salt something you also do in the Virga-Sniffer? Or is it something you do to determine the probability thresholds used in the algorithm of this paper? If the latter, I would put it in the section on the new algorithm, in a subsection called "threshold determination". My main problem is that throughout the whole discussion it is never very clear which parts are in the Virga-Sniffer and which belong only to the new algorithm.
18b) line 209: are clutter c and clutter m the parameters c and m in Eq. 1? If yes, use the same formalism. If not, explain, and also explain whether they are linked. It is still unclear how you determine the actual values of the parameters.
19) line 223: but why do so if the cloud base in Cloudnet (Tuononen et al., 2019) performs better? At least, this is what is evident from your selected case study.
20) line 232: reference to the insect detection method missing.
21) figure 4: I don't understand the discrepancy between the cloud base from Cloudnet and the Virga-Sniffer at 6:30, in precipitation conditions. To me it looks like the cloud base detected by Cloudnet is more reliable, based on Tuononen et al. (2019), so why not use it? I understand that it is at 30 s resolution and the Virga-Sniffer probably runs at a higher resolution, but this can be interpolated. The LCL base causes a big bias in the virga depth values you obtain. Or am I misunderstanding?
22) line 245 and around: In my understanding, the probabilities should depend on the parameters of the distributions of your observables for haze. Now, can we see what your distributions of Ze, Vd, and beta look like for haze, and also for the other hydrometeors, if possible? Can you somehow justify your choices of the parameters mu and sigma? Otherwise, we just have to believe you. It would also be nice to see the same distributions for the Cloudnet classes, to understand how different they are.
23) line 248: how did you decide on these values?
24) lines 250 and 257: same question as before, how did you decide on these values? Maybe you can plot some distributions for all variables to show where the values come from? Otherwise, I am not sure I understand.
25) line 260: how do you define haze distributions for Ze, Vm, and Beta? I assume you define haze using the masks, but then it would be nice to see the distributions of all the variables, not just the case study.
26) line 264: From my understanding, the classification is pixel-based. I think this would be something to highlight. Also, this probability does not depend on the cloud base, so potentially you can classify as haze pixels that do not have a cloud base above. So why, in the plot of Figure 5, is haze found only below the cloud base? Is this common in other case studies too?
27) figure 5: it would be nice to have a zoomed plot of the area from 7:00 to 8:00. My concern is this: if there is sea salt spray and the environment has enough humidity for sea salt condensation nuclei to grow, why do you see haze only below the small shallow clouds? In theory, you should see it at all timestamps in the sub-cloud layer, independently of the cloud presence above, at least assuming that the humidity is homogeneous. Do you have a humidity profiler to display humidity profile time series and understand whether, for some reason, there is a higher amount of humidity below the cloud base, or some temperature variations making water vapor condensation easier? Maybe displaying the IWV time series from the MWR can help, or some T and RH time series from the surface station in the worst case. Perhaps you have an explanation for these gaps? It might be that my thinking makes no sense for some reason.
28) line 291: I see that in the final plot you used the cloud base from Cloudnet to detect virga around 6:30 and not the one from the Virga-Sniffer tool (Fig 4b). So, from the description of the virga identification, I did not fully get in the end which cloud base you use in the Virga-Sniffer. Maybe you can improve the description?
28b) section 3.2: I think that at the beginning of this paragraph you need to explain that you want to study haze occurrence below cloud base and investigate its frequency for different types of clouds. This is why, I think, you currently introduce the cloud type classification out of the blue; it is not needed otherwise, given that your haze identification is pixel-based, depending on the probability assigned to a tuple of Ze, Vd, and beta.
29) line 310: Maybe you can have a subsection "object-based cloud classification". Somewhere before, you also need to introduce the profile-based classification method; what is it? Is it the Virga-Sniffer? It is unclear to me what you later call the profile-based method. Please clarify it here, before using the methods in the results, and explain why you introduce them, as suggested in the previous comment. My impression is that a good position would be at the beginning of this Section 3.2. For both methods, please introduce here the acronyms you use later.
30) line 332: how do you operationally define the bounding box? (A possible connected-component approach is sketched after this list.)
31) line 370: what is the link between haze echo occurrences and cloud base height? This is not very clear (see also the points above).
32) section name 4.3: I would call it "Method validation using the Virga-Sniffer". It would be nice to find a name for the method presented in the paper.
33) section 4.3: It is very confusing for the reader to understand an evaluation that is based on different cloud definitions and different methods to identify clouds. Is it so crucial to include the comparison with ship observations here and not in the appendix? I think that presenting the evaluation only on BCO data would give value to the analysis of the long-term statistics, without putting too many things on the plate. I would compare with the Meteor in a different section, maybe in the appendix. It is very difficult to follow in this way because of all these differences, which are obvious to you because you created the algorithms, but which can make readers get lost and miss the point.
34) line 413: I think you need to briefly mention how the Virga-Sniffer detects precipitation and haze: a profile method based on thresholds... or whatever it is; specify, concisely.
35) section 4.3.1: What is the main message of Sec 4.3.1? I don't find a clear message in this section, because all results seem to be conditioned on some aspect and don't provide a more general conclusion. I would suggest sharpening the core message without specifying so many details, and moving everything that is not the core message to the appendix. It is very dispersive for the reader. I understand that you created these algorithms and are interested in their main differences, but beyond those details, what do we learn about the processes? What is the statistical information you want modelers to remember when they read the paper to find results to compare their runs with (to give an example of possible users of your work)?
36) fig 9: How does it compare with the old Cloudnet classification? Is there a way to show that, instead of only showing Virga-Sniffer results? I think a good message would also be to show the improvement over the current state of the art, i.e., the standard Cloudnet drizzle/rain category.
37) line 448: it is quite dispersive, so it is not easy to remember the difference between the two methods. I would recall here that the developed haze method is based on probability, while the Virga-Sniffer haze echo detection is based on... a combination of thresholds? (I understood as much, but I am not sure.) And add that you want to compare them.
38) line 464: Can you please motivate this choice again? (Also commented before.) I don't understand why you consider haze only when there is a cloud above. I don't understand why you want to exclude the cases that occur without a cloud base, as if they were physically different. In my view, they should not be different, and they should not be excluded.
39) line 476: what do you mean here exactly by rain proportion? Have you checked the rain amounts? Or is it a count of rain occurrence? Please clarify.
40) line 505: where? Refer to the plot; it would be nice to follow your words with visual support.
41) line 506: I think this sentence is too strong if you want to base it on results from the case study of Acquistapace et al. (2019). In Fig. 13b of Acquistapace et al. (2019), it is clearly visible that the distribution of mature drizzle, which is the larger drizzle stage, does not have a sharp peak at -10 dBZ but instead shows a tail extending to -40 dBZ. Therefore, it is not true that clouds with Ze lower than -10 dBZ do not indicate the presence of drizzle. Your values below -10 dBZ completely match the drizzle growth stage, which has a long tail of mean Doppler velocity values corresponding to falling hydrometeors (Fig. 13d).
42) line 541: do you have an example of this situation to discuss with plots?
43) line 589: Why is this section in this paper and not in the Virga-Sniffer paper? Do we need it if, in the end, you used the cloud base from Cloudnet (Fig 5) and you seem to suggest that the cloud base detection algorithm of Tuononen et al. (2019) performs better? I am asking for the sake of readability. In the paper there is a continuous jump between the Virga-Sniffer and the algorithm you want to present, and it is very dispersive (see comments above).
44) line 589: designed
45) appendix c: Again: are you presenting the Virga-Sniffer tool or the other algorithm? I find it confusing to present an evaluation of an algorithm from your previous paper in the paper in which you introduce a new algorithm. I suggest removing it and publishing it elsewhere. Or just change the title and structure and introduce the new algorithm as a development of the Virga-Sniffer. It is extremely hard to follow for me, but maybe it is my problem.
46) table B1: is this statistic relevant to the sea salt discrimination that you have in the title? I think it is off-topic.
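On comment 30 (the operational definition of the bounding box): one common approach, sketched here purely as an assumption about what the authors might do, is connected-component labeling of the binary cloud mask on the time-height grid, with each labeled object's extent defining its bounding box.

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary cloud mask on a (time, height) grid
cloud_mask = np.zeros((12, 8), dtype=bool)
cloud_mask[2:5, 3:6] = True   # first cloud object
cloud_mask[8:11, 1:3] = True  # second, separate object

labels, n_objects = ndimage.label(cloud_mask)  # 4-connectivity by default
for obj_slices in ndimage.find_objects(labels):
    t, h = obj_slices  # slices spanning the object's bounding box
    print(f"time indices {t.start}-{t.stop - 1}, height indices {h.start}-{h.stop - 1}")
```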
Model code and software
Cloudnet haze echoes Johanna Roschke https://doi.org/10.5281/zenodo.10469906
Cloud classification Johanna Roschke https://doi.org/10.5281/zenodo.10471932