Characterization of aerosol over the Eastern Mediterranean by polarization sensitive Raman lidar measurements during A-LIFE – aerosol type classification and type separation
Abstract. Aerosols are key players in the Earth's climate system, with mineral dust being one major component of the atmospheric aerosol load. While former campaigns focused on investigating the properties and effects of rather pure mineral dust layers, the A-LIFE (Absorbing aerosol layers in a changing climate: aging, lifetime and dynamics) campaign in April 2017 aimed to characterize dust in complex aerosol mixtures. In this study we present ground-based lidar measurements performed at Limassol, Cyprus, in April 2017. During our measurement period, the measurement site was affected by complex mixtures of dust from different sources and of pollution aerosols from local sources as well as from long-range transport. We found mean values of the particle linear depolarization ratio and extinction-to-backscatter ratio (lidar ratio) of 0.27 ± 0.02 and 41 sr ± 5 sr at 355 nm and of 0.30 ± 0.02 and 39 sr ± 5 sr at 532 nm for Arabian dust, and of 0.27 ± 0.02 and 55 sr ± 8 sr at 355 nm and of 0.28 ± 0.02 and 53 sr ± 7 sr at 532 nm for Saharan dust. For pollution aerosols, the values found for the particle linear depolarization ratio and the lidar ratio are 0.05 ± 0.02 at 355 nm and 0.04 ± 0.02 at 532 nm, and 65 sr ± 12 sr at 355 nm and 60 sr ± 16 sr at 532 nm, respectively. We use our measurements for aerosol typing and compare the results to aerosol typing from sun photometer data, in-situ measurements, and trajectory analysis. The different methods agree well on the derived aerosol type, but a comparison of the derived dust mass concentrations shows that the trajectory analysis frequently underestimates the high dust concentrations found during major mineral dust events.
Status: closed
RC1: 'Comment on egusphere-2024-140', Anonymous Referee #1, 01 Feb 2024
General comments
The paper is a nice addition to the literature on lidar measurements of the lidar ratio and particle linear depolarization ratio of different aerosol types, with good support from comparisons with other methodologies. Interesting findings include support for a clear separation between the lidar ratios of Arabian and Saharan dust, and at least one case where the in situ suite suggested a pollution component in a dust mixture where the lidar could only detect dust. A section with dust mass and volume concentrations derived from a chain of assumptions is less interesting, partly because a lack of error characterization makes it difficult to know how quantitatively useful these estimates are; yet even the qualitative comparison revealed a possible weakness in models' dust transport, which makes another useful conclusion.
While the manuscript is generally good, there are sections that seem frustratingly like a first draft. These include statements that are unsupported or vaguely approximate where precision is called for, important data presented in tables or figures without much discussion or analysis in the text, and figures with inconsistencies in presentation that make them difficult to relate to each other or to the text. Also, in many cases it is difficult to find a description of which instruments were used to derive specific data items in the figures and tables. (In most cases, I think I can infer which instruments, but can't find text to confirm my impression.) There are probably also at least a few actual errors in numbers in the text and figures, so these should all be checked thoroughly. I think it's important to improve these points and not ignore them, but all my suggestions should be quite straightforward and easy for the authors to address.
Some important specific points:
How were the lidar values that are quoted in the abstract and conclusion derived? From single measurements, from averages over the profiles of the selected case studies, or from aggregates over every point in the mission identified as a specific type? Do they combine POLIS and POLLY-XT measurements or refer only to POLIS? This should be stated clearly in whichever section is most relevant to the calculation and again in the conclusion.
Error bars and quoted uncertainties throughout refer only to systematic errors. It's excellent that the team has characterized the systematic uncertainties so carefully, since that is sometimes neglected. However, this methodology also mentions significant averaging, so there is random error that's not represented. There is also significant natural variability and it's important to have an estimate of the variability, to better support conclusions about whether different studies observe comparable values. It seems like multiple measurements are used to derive the quoted values of lidar ratio and PLDR (i.e. in the abstract and conclusion), so a standard deviation would be easy to calculate. (Measurement random error would probably have to come from a propagation of errors that the authors may not be prepared to do, but if it is available, this should be reported as well).
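To illustrate the kind of reporting this asks for, here is a minimal sketch; the input array is a set of invented placeholder values, not campaign data, and the systematic uncertainty is taken as a fixed number of the sort the authors already quote:

```python
import numpy as np

# Hypothetical per-measurement lidar ratios (sr) for one aerosol type;
# the real inputs would be the individual POLIS / POLLY-XT retrievals.
lidar_ratio_532 = np.array([52.0, 55.5, 53.0, 56.0, 51.5])

mean_lr = lidar_ratio_532.mean()
variability = lidar_ratio_532.std(ddof=1)  # sample std. dev. = natural variability
systematic = 5.0  # systematic uncertainty as characterized by the authors (sr)

print(f"{mean_lr:.0f} sr +/- {systematic:.0f} sr (sys.) "
      f"+/- {variability:.1f} sr (variability)")
```

Reporting both numbers side by side would let readers judge whether differences between studies exceed the natural variability.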
At line 125, it's stated that the manuscript will "intercompare the measurements and resulting classification" (between POLLY_XT and POLIS). This intercomparison was not done. While both sets of measurements appear in Table 1, there's no discussion of them. Table 2 does not show classifications for the two lidars separately, and although the separate classifications appear in the scatter plot of Figure 9, there's no way for a reader to identify simultaneous measurements, and again no analysis or discussion. Indeed, it is also difficult to tell from the text whether both or just one of the instruments' results were used in various places, such as the summary values quoted in the abstract and the lidar classification in Table 2.
I would like to know more about the case where in situ typing indicated a pollution component in an airmass that the lidar could not possibly type as anything other than pure dust. This was called out in the conclusions, and I agree it's an interesting finding, so it deserves more attention. Specifically, what chemical or other measurements in the in situ suite indicated pollution? Is there an estimate of the mass fraction or any other measurement that would support the statement that it's a minor component (as stated in the text) or alternatively "moderate" (as stated in the table)? (By the way, that inconsistency also needs to be resolved.) Would this amount of pollution be expected to impact either the radiative properties or the aerosol-cloud interactions? It might be useful to highlight this case, and/or other "difficult" cases, as case studies, besides the pure-type cases highlighted earlier.
Specific points by location in the manuscript:
Abstract: Please state in the abstract that the quoted error bars represent systematic error and not variability.
(Introduction)
Line 42 or wherever seems best to the authors, please discuss whether the analysis in this study assumes the mixing is external mixing rather than internal, and any implications.
Line 47. The argument for the need for typing is vague and somewhat unconvincing. It could probably be improved by being more specific. E.g. specifically how would radiation calculations use typing information and what quantitative properties could be estimated better by knowing the type of particles?
(Methodology)
Line 112. "and for the daytime measurements". Please expand on this to explain how the lidar ratio is used for the daytime measurements. The information is in the Discussion section on page 19, but the Methodology is where I think most readers would look for it first.
Section 2.4 and 2.5. There are eight or so different assumed values in these two sections to make the leap from the lidar-derived particulate depolarization ratio and backscatter to dust fraction and volume and mass concentrations. Clearly this must lead to a lot of uncertainty in the output. What is the estimated uncertainty on the assumed parameters and on the calculated outputs?
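For concreteness, this is the assumption chain in question, sketched in the style of the one-step separation of Tesche et al. (2009) that the manuscript builds on; every constant below (the endpoint depolarization ratios, the dust lidar ratio, the extinction-to-volume conversion factor, and the particle density) is an assumed placeholder, and each carries an uncertainty that propagates to the final mass concentration:

```python
import numpy as np

def dust_backscatter_fraction(pldr, pldr_dust=0.31, pldr_nondust=0.05):
    """Dust fraction of the particle backscatter from the measured PLDR,
    in the style of the one-step separation of Tesche et al. (2009).
    The two endpoint PLDR values (pure dust / pure non-dust) are assumed."""
    frac = ((pldr - pldr_nondust) * (1 + pldr_dust)) / \
           ((pldr_dust - pldr_nondust) * (1 + pldr))
    return np.clip(frac, 0.0, 1.0)

# Placeholder measurement: PLDR and particle backscatter (m^-1 sr^-1)
pldr, beta_p = 0.20, 2.0e-6

beta_dust = dust_backscatter_fraction(pldr) * beta_p
alpha_dust = 45.0 * beta_dust        # assumed dust lidar ratio (sr)
vol_dust = 0.64e-6 * alpha_dust      # assumed extinction-to-volume factor (m)
mass_dust = 2600.0 * vol_dust * 1e9  # assumed density 2600 kg/m^3 -> ug/m^3

print(f"dust mass concentration ~ {mass_dust:.0f} ug/m^3")
```

Propagating even a modest (10-20 %) uncertainty on each of these assumed constants through the chain would give readers the missing error estimate on the outputs.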
Line 156. The procedure for classification with AERONET measurements should be discussed in the methodology section (currently part of 3.5.1)
Line 165. How are the two different sources of dust distinguished using in situ data?
Line 169 and Section 2.8. It's confusing that the "in situ" classification is stated to use FLEXPART, and then there's also a separate FLEXPART classification. If FLEXPART is really used as an integral part of what's called in situ classification, then this needs more explanation so that readers can understand what is really based on measurements and what's based on a model. If it's only to separate Saharan and Arabian dust then it would be better to have the in situ classification in table 2 only indicate "dust" and leave the FLEXPART classification as a completely separate entry.
L183 "an aerosol type". The FLEXPART aerosol labels look like they are probably components that can appear simultaneously, rather than a single aerosol type per one-minute section. Please clarify. That is, in Table 2, where groups are listed, such as "SD, OM, SO4, SS", is the reader to understand that these are multiple components found together? Consider changing the wording to say something like "were assigned aerosol components" instead of "an aerosol type".
(Results)
Line 193. Please specify which satellite measurements.
Figure 1. It is difficult to pick out data for specific dates mentioned in the text, exacerbated by having 6 minor ticks for each 5 days, a difficult interpolation to do by eye. Can the axis please be reformed to be similar to Figure 2 where each minor tick represents one day?
Figure 1 and Figure 2. It would also be very helpful to annotate the x-axis on both figures with lines to indicate the major events and case studies that are discussed in the text.
Line 210 "Similar to the Arabian dust events ... almost wavelength independent". I don't see that the Arabian dust cases can be described this way, especially the first. Please reword.
How does the AOD summed from the lidar extinction compare to the AERONET AOD? This could help verify that the AERONET is not impacted by clouds, highlight any potential issues from day night differences, and potentially help support the subsequent discussion about the mass concentration differences between the lidar calculations and AERONET.
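Such a check would be straightforward; a sketch with an invented extinction profile (the overlap region near the surface would need an assumption, as noted below):

```python
import numpy as np

# Invented 532 nm extinction profile: altitude (km) and extinction (Mm^-1).
# The real input would be the retrieved lidar profile; bins below full
# overlap would need an assumption, e.g. extending the lowest trusted value.
z_km = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
alpha = np.array([80.0, 60.0, 45.0, 30.0, 15.0, 5.0])

# Column AOD = integral of extinction over altitude (trapezoidal rule);
# Mm^-1 * km carries a factor 1e-3, so the result is dimensionless.
aod_lidar = float(np.sum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(z_km))) * 1e-3
print(f"lidar-integrated AOD ~ {aod_lidar:.3f}, to compare with AERONET at 500 nm")
```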
Line 232 What's the justification for calling these cases "pure"? The in situ classification in the table does not seem to support calling them all pure.
Figure 3. While the backtrajectories show the direction the airmasses came from, they do not seem to go far enough back to show the sources. Both the pollution and Saharan dust tracks are just as dispersed in altitude at the earliest time shown on the track as at the measurement location. Wouldn't we expect to see tracks converging in altitude to the injection height at the source? Later, at line 280, it says "(Figure 3) indicated the western Saharan region" but that's not actually true, because the trajectories don't go far enough back to reach the western part of the Sahara.
Line 244. Again, which satellites?
Figure 2 shows two different Angstrom exponents. The text frequently refers to the Angstrom exponent, but doesn't say which one.
Line 248. How were the PLDR values 0.24 +- 0.02 and 0.27 +- 0.01 arrived at? They really do not seem to be averages of the values displayed in Figure 4 between 0.5 and 2.0 km altitude, as implied. Some subset near the top of that range may average to 0.27 at 532, but there doesn't seem to be any single value above 0.23 at 355, so a subsetted range can't really explain the discrepancy there. Are these correct?
Line 273. Similarly I can't understand how the values at 355 and 532 differ by 0.01 when the profiles in Figure 5 seem to completely overlap. Is it because the 532 nm profile goes up a bin or two higher in altitude than 355 where the values are largest? It also seems strange that it wouldn't be acknowledged in the text how similar these two lines are. Is there any error here?
Figure 6 has a clear error in how the PLDR error bars are plotted. Both green and blue error bars are plotted around the green line with none around the blue line. It's also remarkable how much less vertical variability the green line has. Is there any difference in the smoothing/resolution between these?
The quantities quoted in the case studies seem to be somewhat sloppy estimates. For instance, the 380-500 nm Angstrom exponents on April 5 in Figure 2 cover the range from about 0.8 to 1.3 but are described as "characterized by low Angstrom exponent of about 0.5" (line 246). Even if it's the long-wavelength Angstrom exponent that's meant, 0.5 seems to be a bit lower than any of the values shown on that day. The 355 nm lidar ratio is described as "wavelength independent" (line 250), but it is not specified that this applies only to POLIS and not to POLLY_XT.
Line 267 describes the AERONET AOD as "about 0.15 at 500" and the Angstrom exponent as "about 1.5", but the AOD never reaches 0.15 on that day and seems to vary around 0.12, and 1.5 is at the high end of the variability of the Angstrom exponent. Instead of reading "about" this or that, I would rather see a quantitative average and standard deviation over each day (or whatever period is desired) for a more accurate discussion. The same applies to the discussion of the lidar values, which are also described as ">" or "around" (Line 271) instead of quantitatively.
Line 286 "Those values have been reported recently". That is an odd way to phrase this unless the same exact values were observed in another study. I assume the authors mean that the current values are similar or consistent with previous studies. Please specify the values from the other studies so readers can see this for themselves. This applies in other parts of the manuscript also. Please always explicitly quote the values that are being compared.
Line 296. I don't fully understand how extinction can be retrieved in the daytime but lidar ratio can't. My understanding is that the extinction retrieval itself is quite difficult if the signal is noisy, but I believe the backscatter retrieval is not as sensitive (is that correct?) so it would seem that if conditions are sufficient to retrieve extinction, lidar ratio would not be much harder. I understand this is probably explained fully in prior papers by the research group, but a summary of the explanation for this in the methodology section would be appreciated.
Line 298. It's not really the measurements' signal-to-noise ratio that prevents the retrieval of PLDR at high altitudes, is it? That is, the measurements probably have sufficient SNR to retrieve a reasonably precise volume depolarization ratio, but the low values of the scattering ratio prevent converting it to PLDR with reliable accuracy because of the singularity in the conversion equation.
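To spell this out: in the standard conversion from volume to particle depolarization ratio (written here with an assumed molecular depolarization ratio), the denominator vanishes as the backscatter ratio R approaches (1 + delta_v)/(1 + delta_m), i.e. for low aerosol load, so small errors in the inputs are enormously amplified:

```python
def pldr(delta_v, R, delta_m=0.004):
    """Particle linear depolarization ratio from the volume depolarization
    ratio delta_v and the backscatter ratio R (total / molecular backscatter).
    delta_m, the molecular depolarization ratio, is an assumed value that in
    practice depends on the receiver setup."""
    num = (1 + delta_m) * delta_v * R - (1 + delta_v) * delta_m
    den = (1 + delta_m) * R - (1 + delta_v)
    return num / den

# As R -> (1 + delta_v)/(1 + delta_m) ~ 1 (low aerosol load), the
# denominator vanishes and the retrieval becomes unstable:
for R in (5.0, 2.0, 1.2, 1.05):
    print(f"R = {R:4.2f}  ->  PLDR = {pldr(0.05, R):8.3f}")
```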
The switch from time (in Figures 1 and 2) to overpass number (in Figure 7) is not very user-friendly. Would it help to plot Figure 7 against time like Figs 1 and 2 (even though the symbols will not be linearly spaced)? Even if not, please consider annotating the x axis with lines indicating the dates of the major aerosol events identified in the text of the paper to help orient the reader to the new x-axis. A callout to the appendix in the figure caption might also help slightly.
The lowest panel of Figure 7 is very busy and hard to read with so many different symbols, especially with many missing data points. Maybe consider moving the dust and non-dust extinction up to the PLDR plot on a second y axis. That would also have the benefit of making it easier to compare dust fraction to PLDR. Indeed it might be more informative to show it as dust fraction rather than as separate dust and non-dust components. In fact, the PLDR and dust fraction data could very well be a separate Figure 8, since they have nothing to do with the in situ comparison, as far as I can see, and separating them would make them easier to understand.
Indeed, could there be an intercomparison of dust fraction with in situ? In situ has a particle size distribution and coarse mode fraction, so is there any comparison available with some version of the dust fraction derived from PLDR?
Line 303. "In the lowermost layers, the values range..." Why is there no analysis or discussion of the agreement in these more important layers? There is sometimes disagreement beyond the error bars. This is probably not of much concern, but since there was a stated intent to compare the two, a comparison should be discussed both when there is agreement and when there isn't.
Line 304. "The lidar ratio ... helps to distinguish". This is either a non sequitur or a typo, since the sentences before this talk about extinction, and those after it talk about PLDR.
Line 306. Include some discussion of what happens when the PLDR is 0.32 while 0.3 was assumed as the value for pure dust. It looks like no dust contribution is reported in that case (such as overpass #30).
Figure 7 caption. Which instrument are the dust and non-dust contributions from?
Figure 7. I believe the dust and non-dust contributions are derived via the PLDR and backscatter from the POLIS instrument. Why don't the dust and non-dust extinctions add up to the total 532 nm POLIS extinction? For instance, the first two low-altitude overpasses (#1 and #3) have very similar non-dust extinction (gray) and very different dust extinction (orange), but very similar total extinction (green) (and in fact the one with larger dust component has smaller total).
Figure 7. How does overpass #32 have a larger dust extinction than total 532 nm extinction and how should this be interpreted?
Lines 329-335. In the section describing how aerosol types were derived from AERONET, it's somewhat difficult to understand which parts are quoting observations of the Toledano et al. references and which are describing the methodology for typing in this particular study. This confusion is exacerbated by the use of "about" preceding almost every number in the paragraph. This makes the numbers sound very vague, whereas this paragraph needs to describe what discrete thresholds were actually used in the study to set the AERONET types in Table 2. I hope the paragraph can be revised to be more clear and quantitative.
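To be clear about the form of answer I am hoping for: something like the following rule set, where the numbers are invented placeholders and emphatically not the thresholds of this study:

```python
def aeronet_type(aod_500, ae_440_870):
    """Illustrative AERONET typing with explicit, discrete thresholds on
    the 500 nm AOD and the 440-870 nm Angstrom exponent. All numbers are
    invented placeholders, shown only to demonstrate the level of
    specificity the paragraph should reach."""
    if aod_500 < 0.1:
        return "background / marine"
    if ae_440_870 < 0.6:
        return "dust"
    if ae_440_870 > 1.4:
        return "pollution"
    return "mixture"
```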
Figure 8. It would also be good if the thresholds for distinguishing types were shown as lines on the left panel.
Figure 8. It would probably help the reader if the points in the scatter plot were color-coded in some way that would help pick out specific events. Perhaps color code all of the points by date, or maybe color code only the points that correspond to the key aerosol episodes that are highlighted in this study (the Arabian and Saharan dust events and the pure pollution events).
Section 3.5.2. Strangely, it's really not clear how the lidar aerosol type classifications are done, despite it being the main theme of the paper. I think but am not completely sure, that they were done using only the lidar ratio and PLDR, (and not relying on backtrajectories or other information, because that would make it impossible to use those for verification). Is that correct? If so, what are the thresholds? (Or if the method is more complex than simple thresholds, describe it here). Please also show the thresholds in Figure 9. Which lidar instrument is used for the classifications in Figure 9 and Table 2? It seems like perhaps both are included in Figure 9, but the table has only one column for "lidar".
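If simple thresholds in the PLDR / lidar-ratio plane are indeed what was used, the whole method could be stated as compactly as this sketch (the decision boundaries are invented for illustration, loosely inspired by the mean values quoted in the abstract):

```python
def lidar_type(pldr_532, lr_532):
    """Illustrative 2-D threshold classification at 532 nm from the
    particle linear depolarization ratio and the lidar ratio (sr). The
    boundary values are invented placeholders, not the study's criteria."""
    if pldr_532 > 0.25:
        return "Arabian dust" if lr_532 < 45 else "Saharan dust"
    if pldr_532 < 0.07:
        return "pollution" if lr_532 > 45 else "marine"
    return "dust mixture"
```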
Table 1. Extinction should be included in the table, since it's also shown in Figures and discussed in the text.
Table 1 caption. "For this evaluation" What exactly does that refer to? (I don't think it can mean the whole manuscript, since POLLY-XT does appear in figures.)
Figure 9. The blue (polluted marine) point at 60 sr looks rather odd. It seems to lie between points labeled pollution on both the lidar ratio and depolarization ratio axes. If this is real, I would like to read an explanation or discussion of this point. (If the data are being classified by the two lidar quantities using thresholds, without using other non-lidar data, I don't see how this point and the black pollution point at about 0.12 PLDR could both be as they are shown. Maybe one is an error, or maybe I misunderstand the method. Either way requires a revision, even if it's just to make the explanation of the method more explicit.)
(Discussion)
Line 400. "verified by the evaluation of air mass source regions". If the back-trajectories are used in both the "in situ" classification and the "lidar" classification, doesn't that complicate the ability to usefully compare them to each other?
Line 410-411. (Repeat from above.) I would like to know more about this more difficult case where in situ typing indicated a pollution component in an airmass that the lidar could not possibly type as anything other than pure dust. Specifically, what chemical or other measurements in the in situ suite indicated pollution? Is there an estimate of the mass fraction or any other measurement that would support the statement that it's a minor component (as stated in the text) or alternatively "moderate" (as stated in the table)? (By the way, that inconsistency also needs to be resolved.) Would this amount of pollution be expected to impact either the radiative properties or the aerosol-cloud interactions? This could be an important conclusion: that depolarization indicating pure dust could still include measurable amounts of pollution, especially if it would be expected to have relevant impacts. It might be useful to highlight this case, and/or other "difficult" cases, as case studies, besides the pure-type cases highlighted earlier.
Table 2. AERONET classifications include a surprising number of similar sounding categories, "Dust and pollution", "mixed dust", "polluted mixture". What is the difference between these, both qualitatively and quantitatively?
Table 2. AERONET 11 April. I'm surprised by "mixed dust". In Fig 2, this date has some of the lowest AE. Wouldn't that make it unambiguously pollution based on how the AERONET classification was described? (As I've said already, I don't find the description of the method very clear, so I could very well be wrong, but would like to understand better.)
Table 2. Lidar column. Which lidar's classification is this?
Table 2. Lidar column. "Mixed pollution" (11 April and 25 April) is not a category in Fig 9. What category is this meant to correspond to? Please make the category labels consistent across all figures and tables.
Table 2. "mixed pollution" 25 April. Why would this be "mixed pollution" rather than just pollution, where it seems to fall on Figure 9? (if the values are 0.03 PLDR /60 sr at 355 nm, as I think they are)?
Table 2. FLEXPART 25 April. What is the meaning of "SS(dust)"?
Line 450. "while FLEXPART estimates only about 30 ug/m3". What about dust mass fraction or total mass? That is, does FLEXPART agree better with the total mass, but disagree that the mass is mostly dust, or have a similar fraction but disagree with the total (or neither)?
Figure 10. What is the interpretation if the volume fraction is bigger than bsc fraction, or the reverse, or when there is a large difference? More specifically, how is it possible to have a case like overpass #10 where the volume fraction is apparently zero but the backscatter fraction is near 60%?
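As I understand how such a volume fraction is derived, the mapping from backscatter fraction to volume fraction is a monotonic curve (each component's backscatter is scaled by its lidar ratio and an extinction-to-volume conversion factor), so a zero volume fraction at 60 % backscatter fraction should not be possible. A sketch, with all four constants as assumed placeholders:

```python
def volume_fraction(bsc_fraction, S_d=45.0, S_nd=60.0,
                    c_d=0.64e-6, c_nd=0.30e-6):
    """Map the dust backscatter fraction to a dust volume fraction.
    Each component's backscatter is scaled by its lidar ratio (S, sr) and
    extinction-to-volume conversion factor (c, m); all four values here
    are assumed placeholders. The mapping is a curve, not a straight
    proportionality, but it is monotonic: zero volume fraction at 60 %
    backscatter fraction (overpass #10) cannot follow from this formula."""
    v_d = c_d * S_d * bsc_fraction
    v_nd = c_nd * S_nd * (1.0 - bsc_fraction)
    return v_d / (v_d + v_nd)

print(volume_fraction(0.6))  # ~0.71 with these placeholder values
```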
(Summary and Conclusion)
Lines 464-478. The summary and conclusion section has a confusing flow (especially the first long paragraph). It would be good to reorganize it to group similar thoughts together. It might be useful to have a larger number of distinct paragraphs.
Line 465 "compared to the lidar ratio found for Saharan dust". This would preferably be followed immediately by giving the the lidar ratios for Saharan dust.
Line 467-469. Please put the mean values of the PLDR of the measured Arabian dust immediately after mentioning it (at the beginning of the sentence). As it is, it reads as if the quoted values are the values "for Saharan mineral dust close to the source region".
Line 469. There also seems to be a typo, since the numbers for 355 and 532 are the same here, but different in the abstract.
Line 470 and 471. "their PLDR of" and "their lidar ratio". This sentence is also constructed ambiguously, with the grammar suggesting that these refer to pollution, or perhaps to the Saharan AND Arabian layers. Only by cross-referencing with the abstract can I see these are the values for the Saharan observation.
Line 471-473 and Line 467. These sentences about agreement with previous studies should be placed together or combined. Placing these two very similar ideas far apart is part of what makes the flow hard to follow.
Line 474 "0.05" at 532 nm. The abstract says 0.04.
Line 481-482. "The derived volume fraction of the dust... showed a lower contribution to the total volume compared to its contribution to the optical properties". I don't understand this conclusion. Perhaps spelling out what optical properties are meant would help, but I think it is probably referring to the lidar-derived extinction and PLDR. Since the derived volume fraction is computed from a simple formula applied to the optical properties, is this just saying that the relationship is a curve rather than a proportionality? (That would not be a new discovery.)
Line 487. "Although it could predict the dust transport in general". Was this discussed in the Discussion section?
Line 486-488. "Models generally assume". This is the first appearance of this idea; it should be in the Discussion section.
Line 491-492. "in-situ derived total mass concentration exceeded". In the Discussion a partial explanation of this was given, that it reflects different sampling resolutions. It would be good to repeat that in the conclusion.
Minor points:
Line 40 replace "process" with "processes"
Line 41 replace the comma between optical and microphysical with "and"
Line 205 "indicated by the greenish to reddish colors": please indicate which panel is being referred to.
There should be a callout to Fig 2, probably somewhere between lines 200 and 210
Line 217 replace "neglectable" with "negligible"
Line 298. Probably "too small" is intended rather than "too large".
Line 304. Please split this paragraph into two or more, since it covers a number of different topics. I suggest splitting at line 304 after "campaign".
Line 309. Consider including the reference to the Tesche et al. paper(s) that give the methodology for deriving the dust contribution here again.
Figure 9 color legend. Please make the dots in the legend bigger. It is very difficult to distinguish the colors. The text in the legend is a bit on the small side too.
Table 2 and Table 1. Include overpass number as one of the columns, to make it easier to compare with Figure 7.
Table 2 caption: describe why some rows have no lidar or FLEXPART classifications.
Table 2 and Figure 9 (possibly elsewhere too). "Dust mixture" in Fig 9 is "Dust mixture (marine)" in Table 2. The position on Fig 9 does seem to suggest this is meant as exclusively a marine + dust mixture. If so, consider changing the name to reflect the more specific meaning.
Figure 10. blue "BSC" dot missing at overpass #30. Why?
Line 477. "the different classification schemes". I suggest spelling out what these are, to help the summary section stand by itself.
Citation: https://doi.org/10.5194/egusphere-2024-140-RC1
AC2: 'Reply on RC1', Silke Gross, 05 Aug 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-140/egusphere-2024-140-AC2-supplement.pdf
RC2: 'Comment on egusphere-2024-140', Anonymous Referee #2, 18 Feb 2024
General comments
The manuscript entitled "Characterization of aerosol over the Eastern Mediterranean by polarization sensitive Raman lidar measurements during A-LIFE – aerosol type classification and type separation" by Groß et al. promises an important contribution in the field of aerosol typing from different measurement techniques. Nevertheless, I feel slightly disappointed after reading the manuscript, for two main reasons.
- It seems like parts of the manuscript have been written by different people. Some sections are well written; others are quite sloppy in their use of language and presentation of figures.
- The abstract describes the goal of the A-LIFE campaign to "characterize dust in complex mixtures". But a detailed discussion of aerosol mixtures is missing in the paper. Instead, the case studies focus, once again, on situations with pure aerosol types, and the discussion of derived aerosol types and dust fractions is not comprehensive enough.
Some important points for improvement:
Section 2.2:
It is unclear how the extinction coefficients from daytime measurements (presented in Fig. 7) were derived. Please elaborate in more detail. Furthermore, concerning the retrieved uncertainties, it is insufficient to just provide a reference to previous work, especially since the captions of Figures 4-6 claim that all error bars show only systematic uncertainties; however, the references clearly describe the handling of statistical errors as well. Please add some sentences like "systematic errors of the lidar ratio include the uncertainties of backscatter calibration, …".
Sections 2.3 – 2.5:
The reader may get the impression that the data of both lidar systems have been analyzed with the same software tools and/or by the same experts. Please elaborate.
Section 2.6:
Please provide technical details (e.g. thresholds) of the aerosol typing here, instead of in section 3.5.1.
Section 2.7:
Since the paper by Weinzierl et al. is not yet published, the reader needs more details here, not just a reference. Such details should include: what exactly are the derived aerosol type "mixtures"? What components do those mixtures contain? It is also entirely unclear what a (moderately) polluted mixture with or without coarse mode might be. Please provide descriptive examples like "sea salt mixed with smoke". On that point, it is unclear whether the in-situ classification scheme depends on FLEXPART input or not. If it is not independent, why provide the comparison in Table 2?
Section 2.8:
Is it possible to provide thresholds for the aerosol typing here?
Section 2.9:
If the highly sophisticated FLEXPART simulations were available for the campaign, why is it necessary to also work with basic HYSPLIT trajectories? Please elaborate on why the FLEXPART simulations were not used for the analysis of the lidar measurements.
Section 3.1:
It is mentioned here that smoke aerosols were present during the campaign. But section 2.4 does not describe the handling of smoke in the aerosol typing procedures; please elaborate.
Figure 1:
Change the minor ticks such that each interval corresponds to one day. Indicate Falcon overpass times and altitudes by symbols, and the times of the lidar case studies by vertical lines.
Figure 2:
It is difficult to visually separate the different symbols and colors. In addition, grid lines would be appreciated. Further, the time series of AOD at 340 and 1020 nm could be omitted, as the wavelength dependency is already illustrated by the Angstrom exponents.
Section 3.2:
The introduction of this section claims that the presentation of three case studies with pure aerosol conditions is important for later studies of aerosol mixtures. However, section 1 mentions the availability of many previous field campaigns for studying pure aerosol conditions and the importance of the A-LIFE campaign for investigating mixed conditions. It is very important to add at least one extended case study with mixed conditions. The extended presentation of the mixed case should also include profiles of aerosol type and dust fraction as well as data from non-lidar measurements (AERONET, FALCON). In order to keep the manuscript short, the case studies of 9 and 21 April are unnecessary.
Figures 4-6:
Adding labels (a, b, c, d) to the different panels would be more reader-friendly. Additional panels with profiles of Angstrom exponents and dust fraction need to be added; these will allow for a better comparison with AERONET data and provide a better connection with Table 2. The quicklook could be placed as a top-row panel above the profile panels.
It is commendable that the authors present systematic uncertainties for all profiles. Nevertheless, there must be statistical uncertainties too, and these need to be included in the plots.
Section 3.4:
Please reconsider whether it is really the altitude / signal-to-noise ratio that limits the retrieval of PLDR. In most cases, it is a too-low aerosol amount that makes the retrieval of PLDR mathematically unstable. Thus, whether PLDR can be retrieved or not depends on the atmospheric situation, not on the measurement setup.
Further, it is unclear from which of the two lidar systems the reported values and findings originate. It can be assumed that these data come from the POLIS system. Why are no findings from PollyXT reported in this section?
Again, there is the need for further explanation on how the extinction coefficients have been obtained from daytime measurements.
Section 3.5:
The introductory paragraph of this section is only a repetition of previous statements.
Section 3.5.1:
The description of the typing algorithm should go to section 2. The thresholds used for the typing should be drawn as lines in Figure 8. It would also be helpful for the reader if the individual points were color-coded by type. The highlighted daily mean values have no real benefit, because the change between atmospheric situations does not follow calendar days. It would be better to highlight the measurements during the FALCON flights, which were used for the typing in Table 2.
Section 3.5.2:
The description of aerosol typing from lidar data is in conflict with the description in section 2.4. It rather seems that the typing has been obtained from ancillary data (trajectories) and not from optical properties. How else can it be explained that the new type "Arabian dust" was identified in this study but was not known in previous studies?
Table 1 and Figure 9 need a much more detailed discussion. Why are there missing values in Table 1? What are the measurement times? It would also be helpful for the comparison if the aerosol types and dust fractions derived from both lidars were included in the table and their differences discussed.
Figure 9:
The difference between "dust mixture" and "polluted dust" needs to be explained in this paper. References to previous works are not sufficient here.
Some individual data points require more discussion. There is one "polluted marine" point (25 April?) which is clearly located within the pollution cluster, and one "dust mixture" point (20 April?) between the Saharan and Arabian dust clusters. How can those data points be so far outside their clusters if the optical properties have been used for the typing?
Section 4.1 and Table 2:
First of all, a translation table explaining all the different terms for aerosol types is dearly missing in this paper. Terms differ from instrument to instrument, but also from section to section (e.g. the lidar "mixed pollution" and the AERONET "polluted mixture" were not mentioned anywhere before Table 2). Those inaccuracies can be resolved by careful editing of the text.
The quality and usefulness of this paper could be significantly improved by a discussion of how typing from one method (e.g. lidar) can be compared to typing from another method (e.g. in situ). Obviously, such comparisons are difficult because the instruments measure different quantities (optical properties vs. size distribution). Nevertheless, if a direct "translation" is impossible, the authors should at least discuss the difficulties.
In general, the discussion of Table 2 is way too short. This table contains the main findings of the paper (which has "aerosol type classification" in the title) and deserves more attention, especially since it would be interesting to learn more about the days with complex situations (5, 6, 11 April, as mentioned in the text). Why not present one of those complex cases in the case study section?
Section 4.2:
The introduction of the in-situ total mass concentrations as a "referee" after discussing the differences between the lidar and FLEXPART retrievals (lines 450-452) seems odd.
It would be helpful to add lidar derived total mass concentrations (not only dust component) to Figure 10 for a better comparison with in-situ total mass concentration. Uncertainties due to omitting the non-dust fraction in mass estimates should at least be discussed.
Would it be possible to estimate a dust fraction from in-situ data? The subclasses “pure”, “moderately-polluted” and “polluted” should at least allow for a rough estimate.
Why are no PollyXT results included in Figure 10? Please elaborate.
Minor points:
L41: add comma after properties
L53: add comma after step. A next step -> the next step
L179: the sentence uses "obtained" twice.
Figure 3: the 24h symbols are not visible.
L243-244: strange wording to start a section.
L 246: lidar measurement show -> the lidar measurement shows
Figure 7: the lower panel is too busy. It would be better to present the dust/ non-dust extinction coefficients in a separate panel. Again, statistical uncertainties are missing.
Table 2: again, what are the statistical errors?
Figure 9: What are the error bars? The symbols in the legend are very small and hard to see. The legend should list all aerosol types; it is confusing to have some of them listed in the legend and others in the caption. The plots would be easier to read without aerosol types that are not used in this study (like volcanic ash). This would allow the existing pollution points to be plotted in gray, so that the pollution data points of this study could be plotted in black for better contrast, especially in the left panel. Another option for clarification would be to show not all individual previous data points, but only the cluster boundaries as polygon lines.
Citation: https://doi.org/10.5194/egusphere-2024-140-RC2
AC1: 'Reply on RC2', Silke Gross, 05 Aug 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-140/egusphere-2024-140-AC1-supplement.pdf
Status: closed
-
RC1: 'Comment on egusphere-2024-140', Anonymous Referee #1, 01 Feb 2024
General comments
The paper is a nice addition to the literature of lidar measurements of lidar ratio and particle linear depolarization ratio of different aerosol types, with good support from comparisons with other methodologies. Interesting findings include support for a clear separation between the lidar ratios of Arabian and Saharan dust, and at least one case where the in situ suite suggested a pollution component in a dust mixture where the lidar could only detect dust. A section with dust mass and volume concentrations derived from a chain of assumptions is less interesting partly because a lack of error characterization makes it difficult to know how quantiatively useful these estimates are, yet even the qualitative comparison revealed a possible weakness with models' dust transport making another useful conclusion.
While the manuscript is generally good, there are sections that seem frustratingly like a first draft. These include statements that are unsupported or vaguely approximate where precision is called for, important data presented in tables or figures without much discussion or analysis in the text, and figures with inconsistencies in presentation that make them difficult to relate to each other or with the text. Also, in many cases it is difficult to find a description of which instruments were used to derive specific data items in the figures and tables. (In most cases, I think I can infer which instruments, but can't find text to confirm my impression.) There are probably also a least a few actual errors in numbers in the text and figures, so these should all be checked thoroughly. I think it's important to improve these points and not ignore them, but I think all my suggestions should really be quite straightforward and easy for the authors to improve.
Some important specific points:How were the lidar values that are quoted in the abstract and conclusion derived? From single measurements, or from averages over the profiles of the selected case studies, or aggregates over every point in the mission identified as a specific type? Does it combine POLIS and POLLY-XT measurements or refer only to POLIS? This should be stated clearly in whichever section is most relevant to the calculation and again in the conclusion.
Error bars and quoted uncertainties throughout refer only to systematic errors. It's excellent that the team has characterized the systematic uncertainties so carefully, since that is sometimes neglected. However, this methodology also mentions significant averaging, so there is random error that's not represented. There is also significant natural variability and it's important to have an estimate of the variability, to better support conclusions about whether different studies observe comparable values. It seems like multiple measurements are used to derive the quoted values of lidar ratio and PLDR (i.e. in the abstract and conclusion), so a standard deviation would be easy to calculate. (Measurement random error would probably have to come from a propagation of errors that the authors may not be prepared to do, but if it is available, this should be reported as well).
At line 125, it's stated that the manuscript will "intercompare the measurements and resulting classification" (between POLLY_XT and POLIS). This intercomparison was not done. While both sets of measurements appear in Table 1, there's no discussion of them. Table 2 does not show classifications for the two lidar separately and although the separate classifications appear in the scatter plot of Figure 9, there's no way for a reader to identify simultaneous measurements and again no analysis or discussion. Indeed it is also difficult to tell from the text whether both or just one of the instruments' results were used in various places, such as the summary values quoted in the abstract, and for the lidar classification in Table 2.
I would like to know more about the case where in situ typing indicated a pollution component in an airmass that the lidar could not possibly type as anything other than pure dust. This was called out in the conclusions and I agree it's an interesting finding, so it deserves more attention. Specifically what chemical or other measurements in the in situ suite indicated pollution? Is there an estimate of the mass fraction or any other measurement that would support the statement that it's a minor component (as stated in the text) or alternately "moderate" (as stated in the table) (And by the way, that also needs to be made more consistent). Would this amount of pollution be expected to impact either the radiative properties or the aerosol-cloud interactions? It might be useful to highlight this case, and/or other "difficult" cases as case studies, besides the pure type cases highlighted earlier.
Specific points by location in the manuscript:Abstract: Please state in the abstract that the quoted error bars represent systematic error and not variability.
(Introduction)
Line 42 or wherever seems best to the authors, please discuss whether the analysis in this study assumes the mixing is external mixing rather than internal, and any implications.
Line 47. The argument for the need for typing is vague and somewhat unconvincing. It could probably be improved by being more specific. E.g. specifically how would radiation calculations use typing information and what quantitative properties could be estimated better by knowing the type of particles?
(Methodology)
Line 112. "and for the daytime measurements". Please expand on this to explain how the lidar ratio is used for the daytime measurements. The information is in the Discussion section on page 19, but the Methodology is where I think most readers would look for it first.
Section 2.4 and 2.5. There are eight or so different assumed values in these two sections to make the leap from the lidar-derived particulate depolarization ratio and backscatter to dust fraction and volume and mass concentrations. Clearly this must lead to a lot of uncertainty in the output. What is the estimated uncertainty on the assumed parameters and on the calculated outputs?
Line 156. The procedure for classification with AERONET measurements should be discussed in the methodology section (currently part of 3.5.1)
Line 165. How are the two different sources of dust distinguished using in situ data?
Line 169 and Section 2.8. It's confusing that the "in situ" classification is stated to use FLEXPART, and then there's also a separate FLEXPART classification. If FLEXPART is really used as an integral part of what's called in situ classification, then this needs more explanation so that readers can understand what is really based on measurements and what's based on a model. If it's only to separate Saharan and Arabian dust then it would be better to have the in situ classification in table 2 only indicate "dust" and leave the FLEXPART classification as a completely separate entry.
L183 "an aerosol type". The FLEXPART aerosol labels look like they are probably components that can appear simultaneously, rather than a single aerosol type per one-minute section. Please clarify. That is, in Table 2, where groups are listed, such as "SD, OM, SO4, SS", is the reader to understand that these are multiple components found together? Consider changing the wording to say something like "were assigned aerosol components" instead of "an aerosol type".
(Results)Line 193. Please specify which satellite measurements.
Figure 1. It is difficult to pick out data for specific dates mentioned in the text, exacerbated by having 6 minor ticks for each 5 days, a difficult interpolation to do by eye. Can the axis please be reformed to be similar to Figure 2 where each minor tick represents one day?
Figure 1 and Figure 2. It would also be very helpful to annotate the x-axis on both figures with lines to indicate the major events and case studies that are discussed in the text.
Line 210 "Similar to the Arabian dust events ... almost wavelength independent". I don't see that the Arabian dust cases can be described this way, especially the first. Please reword.
How does the AOD summed from the lidar extinction compare to the AERONET AOD? This could help verify that the AERONET is not impacted by clouds, highlight any potential issues from day night differences, and potentially help support the subsequent discussion about the mass concentration differences between the lidar calculations and AERONET.
Line 232 What's the justification for calling these cases "pure"? The in situ classification in the table does not seem to support calling them all pure.
Figure 3. While the backtrajectories show the direction the airmasses came from, they do not seem to go far enough back to show the sources. Both the pollution and Saharan dust tracks are just as dispersed in altitude at the earliest time shown on the track as at the measurement location. Wouldn't we expect to see tracks converging in altitude to the injection height at the source? Later, at line 280, it says "(Figure 3) indicated the western Saharan region" but that's not actually true, because the trajectories don't go far enough back to reach the western part of the Sahara.
Line 244. Again, which satellites?
Figure 2 shows two different Angstrom exponents. The text frequently refers to the Angstrom exponent, but doesn't say which one.
Line 248. How were the PLDR values 0.24 +- 0.02 and 0.27 +- 0.01 arrived at? They really do not seem to be averages of the values displayed in Figure 4 between 0.5 and 2.0 km altitude, as implied. Some subset near the top of that range may average to 0.27 at 532, but there doesn't seem to be any single value above 0.23 at 355, so a subsetted range can't really explain the discrepancy there. Are these correct?
Line 273. Similarly I can't understand how the values at 355 and 532 differ by 0.01 when the profiles in Figure 5 seem to completely overlap. Is it because the 532 nm profile goes up a bin or two higher in altitude than 355 where the values are largest? It also seems strange that it wouldn't be acknowledged in the text how similar these two lines are. Is there any error here?
Figure 6 has a clear error in how the PLDR error bars are plotted. Both green and blue error bars are plotted around the green line with none around the blue line. It's also remarkable how much less vertical variability the green line has. Is there any difference in the smoothing/resolution between these?
The quantities quoted in the case studies seem to be somewhat sloppy estimates. For instance, the 380-500 Angstrom exponents on April 5 on Figure 2 cover the range from about 0.8 to 1.3 and are described as "characterized by low Angstrom exponent of about 0.5" (line 246). Even if its the long wavelength Angstrom exponent that's meant, 0.5 seems to be just a bit lower than any of the values shown on that day. The 355 nm lidar ratio is described as "wavelength independent" (line 250) but it is not specified that this applies only to POLIS and not to POLLY_XT.
Line 267 describes the AERONET AOD as "about 0.15 at 500" and Angstrom of "about 1.5" but the AOD never gets to 0.15 on that day and seems to vary around about 0.12, and 1.5 is at the high end of the variability of the Angstrom exponent. Instead of reading "about" this or that, I would rather see a quantitative average and standard deviation over each day (or whatever period is desired) for a more accurate discussion. Same for the discussion of the lidar values which are also described as ">" or "around" (Line 271) instead of quantitatively.
Line 286 "Those values have been reported recently". That is an odd way to phrase this unless the same exact values were observed in another study. I assume the authors mean that the current values are similar or consistent with previous studies. Please specify the values from the other studies so readers can see this for themselves. This applies in other parts of the manuscript also. Please always explicitly quote the values that are being compared.
Line 296. I don't fully understand how extinction can be retrieved in the daytime but lidar ratio can't. My understanding is that the extinction retrieval itself is quite difficult if the signal is noisy, but I believe the backscatter retrieval is not as sensitive (is that correct?) so it would seem that if conditions are sufficient to retrieve extinction, lidar ratio would not be much harder. I understand this is probably explained fully in prior papers by the research group, but a summary of the explanation for this in the methodology section would be appreciated.
Line 298. It's not really the measurements' signal-to-noise ratio that prevents the retrieval of PLDR at high altitudes, is it? That is, the measurements probably have sufficient SNR to retrieve a reasonably precise volume depolarization ratio, but the low values of the scattering ratio prevent converting it to PLDR with reliable accuracy becasue of the singularity in the conversion equation.
The switch from time (in Figures 1 and 2) to overpass number (in Figures 7) is not very user-friendly. Would it help to plot Figure 7 against time like Figs 1 and 2 (even though the symbols will not be linearly spaced)? Even if not, please consider annotating the x axis with lines indicating the dates of the major aerosol events identified in the text of the paper to help orient the reader to the new x-axis. A callout to the appendix in the figure caption might also help slightly.
The lowest panel of Figure 7 is very busy and hard to read with so many different symbols, especially with many missing data points. Maybe consider moving the dust and non-dust extinction up to the PLDR plot on a second y axis. That would also have the benefit of making it easier to compare dust fraction to PLDR. Indeed it might be more informative to show it as dust fraction rather than as separate dust and non-dust components. In fact, the PLDR and dust fraction data could very well be a separate Figure 8, since they have nothing to do with the in situ comparison, as far as I can see, and separating them would make them easier to understand.
Indeed, could there be an intercomparison of dust fraction with in situ? In situ has a particle size distribution and coarse mode fraction, so is there any comparison available with some version of the dust fraction derived from PLDR?
Line 303. "In the lowermost layers, the values range..." Why is there no analysis or discussion of the agreement in these more important layers? There is sometimes disagreement beyond the error bars. This is probably not of much concern, but since there was a stated intent to compare the two, a comparison should be discussed both when there is agreement and when there isn't.
Line 304. "The lidar ratio ... helps to distinguish". This is either a non-sequitor or a typo since the sentences before this talk about extinction, and after it talk about PLDR.
Line 306. Include some discussion of what happens when the PLDR is 0.32 while 0.3 was assumed as the value for pure dust. It looks like maybe no dust contribution is reported in that case (such as overpass #30)
Figure 7 caption. Which instrument are the dust and non-dust contributions from?
Figure 7. I believe the dust and non-dust contributions are derived via the PLDR and backscatter from the POLIS instrument. Why don't the dust and non-dust extinctions add up to the total 532 nm POLIS extinction? For instance, the first two low-altitude overpasses (#1 and #3) have very similar non-dust extinction (gray) and very different dust extinction (orange), but very similar total extinction (green) (and in fact the one with larger dust component has smaller total).
Figure 7. How does overpass #32 have a larger dust extinction than total 532 nm extinction and how should this be interpreted?
Lines 329-335. In the section describing how aerosol types were derived from AERONET, it's somewhat difficult to understand which parts are quoting observations of the Toledano et al. references and which are talking about the methodology for typing in this particular study. This confusion is exacerbated by the use of "about" preceeding almost every number in the paragraph. This makes the numbers sound very vague whereas this paragraph needs to describe what discrete thresholds were actually used in the study to set the AERONET types in Table 2. I hope the paragraph can be revised to be more clear and quantitative.
Figure 8. It would also be good if the thresholds for distinguishing types were shown as lines on the left panel.
Figure 8. It would probably help the reader if the points in the scatter plot were color-coded in some way that would help pick out specific events. Perhaps color code all of the points by date, or maybe color code only the points that correspond to the key aerosol episodes that are highlighted in this study (the Arabian and Saharan dust events and the pure pollution events).
Section 3.5.2. Strangely, it's really not clear how the lidar aerosol type classifications are done, despite it being the main theme of the paper. I think but am not completely sure, that they were done using only the lidar ratio and PLDR, (and not relying on backtrajectories or other information, because that would make it impossible to use those for verification). Is that correct? If so, what are the thresholds? (Or if the method is more complex than simple thresholds, describe it here). Please also show the thresholds in Figure 9. Which lidar instrument is used for the classifications in Figure 9 and Table 2? It seems like perhaps both are included in Figure 9, but the table has only one column for "lidar".
Table 1. Extinction should be included in the table, since it's also shown in Figures and discussed in the text.
Table 1 caption. "For this evaluation" What exactly does that refer to? (I don't think it can mean the whole manuscript, since POLLY-XT does appear in figures.)
Figure 9. The blue (polluted marine) point at 60 sr looks rather odd. It seems like it is between points labeled pollution on both the lidar ratio and depolarization ratio axes. If this is real, I would like to read an explanation or discussion of this point (If the data are being classified by the two lidar quantities using thresholds, without using other non-lidar data, I don't see how this point and the black pollution point at about 0.12 PLDR could both be as they are shown. Maybe one is an error, or maybe I misunderstand the method. Either way requires a revision, even if it's just to make the explanation of the method more explicit.)
(Discussion)Line 400. "verified by the evaluation of air mass source regions". If the back-trajectories are used in both the "in situ" classification and the "lidar" classification, doesn't that complicate the ability to usefully compare them to each other?
Line 410-411. (repeat from above) I would like to know more about this more difficult case where in situ typing indicated a pollution component in an airmass that the lidar could not possibly type as anything other than pure dust. Specifically, what chemical or other measurements in the in situ suite indicated pollution? Is there an estimate of the mass fraction or any other measurement that would support the statement that it's a minor component (as stated in the text) or alternatively "moderate" (as stated in the table)? (And by the way, those two statements also need to be made consistent.) Would this amount of pollution be expected to impact either the radiative properties or the aerosol-cloud interactions? This could be an important conclusion, that depolarization indicating pure dust could still include measurable amounts of pollution, especially if it would be expected to have relevant impacts. It might be useful to highlight this case, and/or other "difficult" cases as case studies, besides the pure type cases highlighted earlier.
Table 2. The AERONET classifications include a surprising number of similar-sounding categories: "Dust and pollution", "mixed dust", "polluted mixture". What is the difference between these, both qualitatively and quantitatively?
Table 2. AERONET 11 April. I'm surprised by "mixed dust". In Fig 2, this date has some of the highest AE, and a high Ångström exponent indicates fine-mode particles rather than coarse dust. Wouldn't that make it unambiguously pollution based on how the AERONET classification was described? (As I've said already, I don't find the description of the method very clear, so I could very well be wrong, but would like to understand better.)
Table 2. Lidar column. Which lidar's classification is this?
Table 2. Lidar column. "Mixed pollution" (11 April and 25 April) is not a category in Fig 9. What category is this meant to correspond to? Please make the category labels consistent across all figures and tables.
Table 2. "mixed pollution" 25 April. Why would this be "mixed pollution" rather than just pollution, where it seems to fall on Figure 9? (if the values are 0.03 PLDR /60 sr at 355 nm, as I think they are)?
Table 2. FLEXPART 25 April. What is the meaning of "SS(dust)"?
Line 450. "while FLEXPART estimates only about 30 ug/m3". What about dust mass fraction or total mass? That is, does FLEXPART agree better with the total mass, but disagree that the mass is mostly dust, or have a similar fraction but disagree with the total (or neither)?
Figure 10. What is the interpretation if the volume fraction is bigger than bsc fraction, or the reverse, or when there is a large difference? More specifically, how is it possible to have a case like overpass #10 where the volume fraction is apparently zero but the backscatter fraction is near 60%?
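For context, I assume (my reading, not stated in the manuscript) that the chain behind Figure 10 is of the POLIPHON type, i.e.

\[
\alpha_d = S_d\,\beta_d, \qquad v_d = c_{v,d}\,\alpha_d,
\]

with dust lidar ratio S_d and an extinction-to-volume conversion factor c_{v,d}. Every step is a multiplication by a positive factor, so a non-zero dust backscatter fraction cannot map to a zero dust volume fraction, unless the total volume in the denominator comes from a different data source (e.g. the in situ size distributions). Please clarify which of these applies at overpass #10.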
(Summary and Conclusion)
Lines 464-478. The summary and conclusion section has a confusing flow (especially the first long paragraph). It would be good to reorganize it to group similar thoughts together. It might be useful to have a larger number of distinct paragraphs.
Line 465 "compared to the lidar ratio found for Saharan dust". This would preferably be followed immediately by giving the the lidar ratios for Saharan dust.
Line 467-469. Please put the mean values of the PLDR of the measured Arabian dust immediately after mentioning it (at the beginning of the sentence). As it is, it reads as if the quoted values are the values "for Saharan mineral dust close to the source region".
Line 469. There also seems to be a typo, since the numbers for 355 and 532 are the same here, but different in the abstract.
Line 470 and 471. "their PLDR of" and "their lidar ratio". This sentence is also constructed ambiguously, with the grammar suggesting that these refer to pollution, or perhaps to the Saharan AND Arabian layers. Only by cross-referencing with the abstract can I see these are the values for the Saharan observation.
Line 471-473 and Line 467. These sentences about agreement with previous studies should be placed together or combined. These two very similar ideas placed far apart is part of what makes the flow hard to follow.
Line 474 "0.05" at 532 nm. The abstract says 0.04.
Line 481-482. "The derived volume fraction of the dust... showed a lower contribution to the total volume compared to its contribution to the optical properties". I don't understand this conclusion. Perhaps spelling out what optical properties would help, but I think it is probably referring to the lidar-derived extinction and PLDR. Since the derived volume fraction is assumed from a simple formula applied to the optical properties, is this just saying that the relationship is a curve rather than proportional? (not a new discovery).
Line 487. "Although it could predict the dust transport in general". Was this discussed in the Discussion section?
Line 486-488. "Models generally assume". This is the first appearance of this idea; it should be in the Discussion section.
Line 491-492. "in-situ derived total mass concentration exceeded". In the Discussion a partial explanation of this was given, that it reflects different sampling resolutions. It would be good to repeat that in the conclusion.
Minor points:
Line 40 replace "process" with "processes"
Line 41 replace the comma between optical and microphysical with "and"
Line 205 "indicated by the greenish to reddish colors": please indicate which panel is being referred to.
There should be a callout to Fig 2, probably somewhere between lines 200 and 210
Line 217 replace "neglectable" with "negligible"
Line 298. Probably "too small" is intended rather than "too large".
Line 304. Please split this paragraph into two or more, since it covers a number of different topics. I suggest splitting at line 304 after "campaign".
Line 309. Consider including the reference to the Tesche et al. paper(s) that give the methodology for deriving the dust contribution here again.
Figure 9 color legend. Please make the dots in the legend bigger. It is very difficult to distinguish the colors. The text in the legend is a bit on the small side too.
Table 2 and Table 1. Include overpass number as one of the columns, to make it easier to compare with Figure 7.
Table 2 caption. Describe why some rows have no lidar or FLEXPART classifications.
Table 2 and Figure 9 (possibly elsewhere too). "Dust mixture" in Fig 9 is "Dust mixture (marine)" in Table 2. The position on Fig 9 does seem to suggest this is meant as exclusively a marine + dust mixture. If so, consider changing the name to reflect the more specific meaning.
Figure 10. blue "BSC" dot missing at overpass #30. Why?
Line 477. "the different classification schemes". I suggest spelling out what these are, to help the summary section stand by itself.
Citation: https://doi.org/10.5194/egusphere-2024-140-RC1
-
AC2: 'Reply on RC1', Silke Gross, 05 Aug 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-140/egusphere-2024-140-AC2-supplement.pdf
-
RC2: 'Comment on egusphere-2024-140', Anonymous Referee #2, 18 Feb 2024
General comments
The manuscript entitled "Characterization of aerosol over the Eastern Mediterranean by polarization sensitive Raman lidar measurements during A-LIFE – aerosol type classification and type separation" by Groß et al. promises an important contribution to the field of aerosol typing from different measurement techniques. Nevertheless, I feel slightly disappointed after reading the manuscript, for two main reasons.
- It seems like parts of the manuscript were written by different people. Some sections are well written; others are quite sloppy in their use of language and presentation of figures.
- The abstract describes the goal of the A-LIFE campaign as to "characterize dust in complex mixtures". But a detailed discussion of aerosol mixtures is missing in the paper. Instead, the case studies focus, once again, on situations with pure aerosol types, and the discussion of the derived aerosol types and dust fractions is not comprehensive enough.
Some important points for improvement
Section 2.2:
It is unclear how the extinction coefficients from daytime measurements (presented in Fig. 7) were derived; please elaborate in more detail. Furthermore, concerning the retrieved uncertainties, it is insufficient to just provide a reference to previous work, especially since the captions of Figures 4-6 claim that all error bars show only systematic uncertainties, whereas the references clearly describe the handling of statistical errors as well. Please add a sentence along the lines of "systematic errors of the lidar ratio include the uncertainties of the backscatter calibration, …".
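For orientation, the standard Raman solution for the aerosol extinction coefficient (Ansmann et al., 1990) is

\[
\alpha_{\mathrm{aer}}(\lambda_0,z) \;=\; \frac{\dfrac{d}{dz}\,\ln\dfrac{N_R(z)}{z^2\,P(z,\lambda_R)} \;-\; \alpha_{\mathrm{mol}}(\lambda_0,z) \;-\; \alpha_{\mathrm{mol}}(\lambda_R,z)}{1+\left(\lambda_0/\lambda_R\right)^{k}},
\]

with Raman signal P(z,λ_R), Raman-scatterer number density N_R and an assumed Ångström exponent k. Because this relies on the weak Raman return, it is normally restricted to nighttime. If the daytime extinction values in Fig. 7 come instead from, e.g., a Klett retrieval with an assumed lidar ratio (my guess, not stated in the manuscript), that assumption and its value should be given.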
Sections 2.3 – 2.5:
The reader may get the impression that the data from both lidar systems were analyzed with the same software tools and/or by the same experts. Please elaborate.
Section 2.6:
Please provide the technical details (e.g. thresholds) of the aerosol typing here, instead of in section 3.5.1.
Section 2.7:
Since the paper by Weinzierl et al. is not yet published, the reader needs more details here, not just a reference. Such details should include: what exactly are the derived aerosol-type "mixtures"? What components do those mixtures contain? It is also entirely unclear what a (moderately) polluted mixture with or without coarse mode might be; please provide descriptive examples like "sea salt mixed with smoke". On that point, it is unclear whether the in-situ classification scheme depends on FLEXPART input or not. If it is not independent, why provide the comparison in Table 2?
Section 2.8:
Is it possible to provide thresholds for the aerosol typing here?
Section 2.9:
If the highly sophisticated FLEXPART simulations were available for the campaign, why is it necessary to work with basic HYSPLIT trajectories too? Please explain why the FLEXPART simulations were not used for the analysis of the lidar measurements.
Section 3.1:
It is mentioned here that smoke aerosols were present during the campaign, but section 2.4 does not describe the handling of smoke in the aerosol typing procedures; please elaborate.
Figure 1:
Change the minor ticks so that each interval corresponds to one day. Indicate the Falcon overpass times and altitudes with symbols, and the times of the lidar case studies with vertical lines.
Figure 2:
It is difficult to visually separate the different symbols and colors. In addition, grid lines would be appreciated. Further, the time series of AOD at 340 and 1020 nm could be omitted, as the wavelength dependence is already illustrated by the Ångström exponents.
Section 3.2:
The introduction of this section claims that the presentation of three case studies with pure aerosol conditions is important for the later studies of aerosol mixtures. However, section 1 mentions the availability of many previous field campaigns for studying pure aerosol conditions and stresses the importance of the A-LIFE campaign for investigating mixed conditions. It is very important to add at least one extended case study with mixed conditions. The extended presentation of the mixed case should also include profiles of aerosol type and dust fraction as well as data from non-lidar measurements (AERONET, Falcon). To keep the manuscript short, the case studies of 9 and 21 April could be omitted.
Figures 4-6:
Adding labels (a, b, c, d) to the different panels would be more reader-friendly. Additional panels with profiles of Ångström exponents and dust fraction need to be added; these will allow for a better comparison with the AERONET data and provide a better connection with Table 2. The quicklook could be placed as a top-row panel above the profile panels.
It is commendable that the authors present systematic uncertainties for all profiles. Nevertheless, there must be statistical uncertainties too, and as such they need to be included in the plots.
Section 3.4:
Please reconsider whether it is really the altitude or signal-to-noise ratio that limits the retrieval of the PLDR. In most cases, it is a too-low aerosol amount that makes the retrieval of the PLDR mathematically unstable. Thus, whether the PLDR can be retrieved depends on the atmospheric situation, not on the measurement setup.
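To support this point quantitatively: with the backscatter ratio R, the volume depolarization ratio δ_v and the molecular depolarization ratio δ_m, the particle depolarization ratio follows (e.g. Freudenthaler et al., 2009) as

\[
\delta_p \;=\; \frac{(1+\delta_m)\,\delta_v\,R \;-\; (1+\delta_v)\,\delta_m}{(1+\delta_m)\,R \;-\; (1+\delta_v)}.
\]

In nearly aerosol-free air, R → 1 and δ_v → δ_m, so numerator and denominator both tend to zero, and small uncertainties in R or δ_v are enormously amplified, regardless of the instrument's signal-to-noise ratio.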
Further, it is unclear from which of the two lidar systems the reported values and findings originate. It can be assumed that these data come from the POLIS system. Why are no findings from PollyXT reported in this section?
Again, further explanation is needed of how the extinction coefficients were obtained from the daytime measurements.
Section 3.5:
The introductory paragraph of this section merely repeats previous statements.
Section 3.5.1:
The description of the typing algorithm should go to section 2. The thresholds used for the typing should be drawn as lines in Figure 8. It would also be helpful for the reader if the individual points were color-coded by type. The highlighted daily mean values have no real benefit, because the change between atmospheric situations does not follow calendar days. It would be better to highlight the measurements during the Falcon flights, which were used for the typing in Table 2.
Section 3.5.2:
The description of aerosol typing from lidar data is in conflict with the description in section 2.4. It rather seems that the typing has been obtained from ancillary data (trajectories) and not from optical properties. How else can it be explained that the new type “Arabian dust” was identified in this study, but was not known in previous studies?
Table 1 and Figure 9 need a much more detailed discussion. Why are there missing values in Table 1? What are the measurement times? It would also be helpful for the comparison if the aerosol types and dust fractions derived from both lidars were included in the table and their differences discussed.
Figure 9:
The difference between "dust mixture" and "polluted dust" needs to be explained in this paper. References to previous work are not sufficient here.
Some individual data points require more discussion. There is one “polluted marine” (25 April?) which is clearly located within the pollution cluster and one “dust mixture” (20 April?) between Saharan and Arabian dust clusters. How can those data points be so far outside their clusters if the optical properties have been used for typing?
Section 4.1 and Table 2:
First of all, a translation table explaining all the different terms for aerosol types is sorely missing in this paper. Terms differ from instrument to instrument, but also from section to section (e.g. the lidar type "mixed pollution" and the AERONET type "polluted mixture" were not mentioned anywhere before Table 2). Such inconsistencies can be resolved by careful editing of the text.
The quality and usefulness of this paper could be significantly improved by a discussion of how typing from one method (e.g. lidar) can be compared to typing from another method (e.g. in situ). Obviously, such comparisons are difficult because the instruments measure different quantities (optical properties vs. size distribution). Nevertheless, if a direct "translation" is impossible, the authors should at least discuss the difficulties.
In general, the discussion of Table 2 is far too short. This table contains the main findings of the paper (which has "aerosol type classification" in the title) and deserves more attention, especially since it would be interesting to learn more about the days with complex situations (5, 6 and 11 April, as mentioned in the text). Why not present one of those complex cases in the case study section?
Section 4.2:
The introduction of in-situ total mass concentrations as a “referee” after discussing the differences between lidar and FLEXPART retrievals (lines 450-452) seems odd.
It would be helpful to add lidar-derived total mass concentrations (not only the dust component) to Figure 10 for a better comparison with the in-situ total mass concentration. Uncertainties due to omitting the non-dust fraction in the mass estimates should at least be discussed.
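Assuming the dust mass in Figure 10 is obtained via the usual conversion chain (again my reading, since the chain is not fully spelled out), i.e.

\[
m_d \;=\; \rho_d\, c_{v,d}\, \alpha_d \;=\; \rho_d\, c_{v,d}\, S_d\, \beta_d,
\]

with particle density ρ_d, the non-dust mass is simply the analogous term m_{nd} = ρ_{nd} c_{v,nd} α_{nd}. Adding it, even with rough literature values for ρ_{nd} and c_{v,nd}, would make the comparison with the in-situ total mass far more meaningful than comparing a dust-only estimate against a total.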
Would it be possible to estimate a dust fraction from in-situ data? The subclasses “pure”, “moderately-polluted” and “polluted” should at least allow for a rough estimate.
Why are no PollyXT results included in Figure 10? Please elaborate.
Minor points:
L41: add comma after properties
L53: add comma after step. A next step -> the next step
L179: the sentence uses "obtained" twice.
Figure 3: the 24h symbols are not visible.
L243-244: strange wording to start a section.
L246: lidar measurement show -> the lidar measurement shows
Figure 7: the lower panel is too busy. It would be better to present the dust and non-dust extinction coefficients in a separate panel. Again, statistical uncertainties are missing.
Table 2: again, what are statistical errors?
Figure 9: What are the error bars? The symbols in the legend are very small and hard to see. The legend should list all aerosol types; it is confusing to have some of them listed in the legend and others in the caption. The plots would be easier to read without the aerosol types that are not used in this study (like volcanic ash). This would allow the previously published pollution points to be plotted in gray, so that the pollution data points of this study could be plotted in black for better contrast, especially in the left panel. Another option for clarification would be to show not all individual previous data points, but only the cluster boundaries as polygon lines.
Citation: https://doi.org/10.5194/egusphere-2024-140-RC2
-
AC1: 'Reply on RC2', Silke Gross, 05 Aug 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-140/egusphere-2024-140-AC1-supplement.pdf