This work is distributed under the Creative Commons Attribution 4.0 License.
Validation of Aeolus L2B products over the tropical Atlantic using radiosondes
Abstract. Since its launch by the European Space Agency in 2018, the Aeolus satellite has been using the first Doppler wind lidar in space to acquire three-dimensional atmospheric wind profiles around the globe. Especially in the tropics, these measurements compensate for the currently limited number of other wind observations, making an assessment of the quality of Aeolus wind products in this region crucial for numerical weather prediction. To evaluate the quality of the Aeolus L2B wind products across the tropical Atlantic Ocean, 20 radiosondes corresponding to Aeolus overpasses were launched from the islands of Sal, Saint Croix and Puerto Rico during August–September 2021 as part of the Joint Aeolus Tropical Atlantic Campaign. During this period, Aeolus sampled winds within a complex environment with a variety of cloud types in the vicinity of the Inter-tropical Convergence Zone and aerosol particles from Saharan dust outbreaks. On average, the validation for Aeolus Rayleigh-clear revealed a random error of 3.8–4.3 m s−1 between 2–16 km and 4.3–4.8 m s−1 between 16–20 km, with a systematic error of −0.5 ± 0.2 m s−1. For Mie-cloudy, the random error between 2–16 km is 1.1–2.3 m s−1 and the systematic error is −0.9 ± 0.3 m s−1. Below clouds or within dust layers, the quality of Rayleigh-clear measurements can be degraded when the useful signal is reduced. In these conditions, we also noticed an underestimation of the L2B estimated error. Gross outliers, which we define as observations with large deviations from the radiosonde but low error estimates, account for less than 5 % of the data. These outliers appear at all altitudes and under all environmental conditions; however, their root cause remains unknown. Finally, we confirm the presence of an orbital-dependent bias of up to 2.5 m s−1 observed with both radiosondes and European Centre for Medium-Range Weather Forecasts model equivalents.
The results of this study contribute to a better characterization of the Aeolus wind product in different atmospheric conditions and provide valuable information for further improvement of the wind retrieval algorithm.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2023-742', Anonymous Referee #1, 02 Jun 2023
The authors present statistics of the validation of Aeolus winds against independent ECMWF model fields and radiosondes. This is important work to gain knowledge on the errors of Aeolus winds. The region used for validation is limited to the tropics, which on the other hand is a very interesting region because of the challenging weather conditions with dust events and convective clouds and because of limited other Aeolus related Cal/Val campaigns in this region.
Major comments
==============
G1) At many places in the paper, the authors compare MADI against EEtot. This is a fundamental mistake, as MADI is not a metric related to standard deviation in the way EEtot (and SMAD) are. The authors can confirm this by taking a sample of random numbers with a normal (Gaussian) distribution and comparing the MADI value with the input standard deviation value.

G2) The authors should be stronger on their main conclusion in the abstract, ending e.g. with: "Based on the data used in this study, Aeolus Rayleigh winds do not meet the mission random error requirement and Mie winds most likely do not meet the mission bias requirement."
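The numerical check proposed in G1 can be sketched as follows. This is a minimal illustration, not the paper's own code: we assume here that MADI denotes a mean absolute deviation, and compare it with SMAD, the scaled median absolute deviation calibrated to estimate the standard deviation of a Gaussian sample.

```python
import numpy as np

# Draw Gaussian random numbers with a known standard deviation
rng = np.random.default_rng(42)
sigma = 2.0
x = rng.normal(0.0, sigma, size=1_000_000)

# One common reading of "MADI": the mean absolute deviation (our assumption)
madi = np.mean(np.abs(x - np.mean(x)))

# SMAD: scaled median absolute deviation; the factor 1.4826 makes it a
# consistent estimator of sigma for Gaussian data
smad = 1.4826 * np.median(np.abs(x - np.median(x)))

print(madi / sigma)  # close to sqrt(2/pi) ~ 0.80, i.e. NOT the input sigma
print(smad / sigma)  # close to 1.00, i.e. recovers the input sigma
```

For a Gaussian, the mean absolute deviation equals sigma·sqrt(2/pi), roughly 0.8·sigma, which is the referee's point: MADI systematically undershoots metrics calibrated to standard deviation, so comparing it directly against EEtot mixes incompatible scales.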
G3) The classes discussed in lines 171 to 174 are very unclear. For instance, what is meant with "below 3 km (very high, high, mid-level, low, very low and fractional cloud types)"? How can you have very high clouds below 3 km?
Also, why not use the useful signal at measurement level to identify clouds within the profile?

G4) line 195. Did you check this statement, e.g., using spectra following Skamarock (2008)? They show that the area below the kinetic energy spectrum (which is actually the atmospheric variability over the integrated scales) can be quite substantial when starting at 340 km (or truncation wavenumber 60).
General comments
================
line 11; measurements -> observations (Note that for Aeolus an observation is the result of accumulated measurements; mixing these terms in the text is confusing. Please correct everywhere in the text accordingly)
line 15; the orbital-dependent bias of up to 2.5 m/s applies to only some parts of the atmosphere. This nuance should be made here.
line 33; "..... along the LOS of the instrument, which is directed perpendicular to the direction of satellite propagation. Please add the last part.
line 54: Replace: "..... that still needs to be explored ..... potentially affecting ....." => .... that needs further exploration .... which impact ....
line 117: replace ".... some SRs ....." by ".... small SR values, which are dominated by instrument noise, ...."
line 120; I do not understand what you mean with ".... and distances between the instruments and the height bins"? Please explain or rephrase.
line 123: ".... especially in the case of strong Mie returns, which are not detected by the classification procedure, ...." The addition is important because in principle measurements with strong returns should be classified as "cloudy" and not enter the Rayleigh-clear wind.
line 138-139; vertical resolution is not in m/s. Probably you mean that the balloon ascending speed is 5 m/s, then measuring every 2 seconds gives a vertical resolution of 10 m. Please correct.
line 216; this is a surrogate for the standard error, right? Please use this more well-known terminology in statistics, rather than "uncertainty of the mean bias".
line 227; I guess the representativeness error is different for Mie and Rayleigh winds as they sample the atmosphere along different length scales, i.e., about 10-15 km for Mie and about 90 km for Rayleigh, along the satellite track? Can the authors please comment on this?
line 239; with EE you mean EE_Aeolus as in Eq. (9), right? Please be consistent in the text
line 243; what do the authors mean with "noise related to atmospheric temperature and pressure"? Do errors in these parameters lead to wind random errors or biases?

Figure 1. red and orange are hard to discriminate. Please use a different color for orange (Rayleigh-cloudy).
Caption of figure 1, please mention explicitly that you used model equivalents from the model background (which did not (yet) use the radiosonde), see also line 129. This is important, obviously, and good to mention again.

line 275 mentions a STD of 2.1 m/s for sqrt(<HLOS_ECMWF-HLOS_RS>^2) at Rayleigh-clear locations. The same metric shows a value of 2.93 m/s at Mie-cloudy locations in line 279. That is quite a large difference for parameters with quite consistent and well-known error characteristics. Assuming that the quality of radiosonde observations is rather constant for the complete profile, this suggests that ECMWF performs substantially worse at locations where Mie winds are found (lower troposphere) than at locations of Rayleigh-clear winds (upper troposphere, lower stratosphere). Or is this discrepancy simply a statistical effect due to the limited data set? Can the authors please comment?
line 283; "as most of the systematic and random errors seem to be specific to the Aeolus Rayleigh-clear winds". But in the text above you show that Mie-cloudy biases are larger than for Rayleigh-clear. Please correct.
line 284: "This stresses the need to identify the underlying potential error sources of Rayleigh clear observations with respect to the presence of clouds and dust aerosols ......"
Given the larger systematic errors in Mie-cloudy, I would think that these are more sensitive to clouds and aerosols. The fact that random errors are larger for Rayleigh than Mie is pretty clear. Please comment.

Table 2. sigma_mu is not defined in section 3.2. Please do.
line 304; "For Mie-cloudy, the systematic difference indicates a bias of 0.9±0.3 m s−1, which is within the uncertainty range of the ESA's specification ..."
No, it is not, see major comment G2. Please correct.

line 306; how do you arrive at 1.1-2.3 m/s? Following Eq. 8 with sigma_rep = 1.5-2.5, sigma_RS = 0.7 and sigma_tot = 2.9, I end up with sigma_Aeolus in the range 1.3-2.4. Where do I go wrong? See also table 3.
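The referee's arithmetic can be reproduced directly. The sketch below implements the quadrature subtraction the comment attributes to the paper's Eq. (8); the function name is ours and the numbers are those quoted in the comment.

```python
import math

def sigma_aeolus(sigma_tot, sigma_rs, sigma_rep):
    """Aeolus random error obtained by removing the radiosonde and
    representativeness error contributions from the total observed
    spread in quadrature (the referee's reading of Eq. 8)."""
    return math.sqrt(sigma_tot**2 - sigma_rs**2 - sigma_rep**2)

# Mie-cloudy values quoted in the comment:
# sigma_tot = 2.9, sigma_RS = 0.7, sigma_rep between 1.5 and 2.5 m/s
print(round(sigma_aeolus(2.9, 0.7, 2.5), 1))  # 1.3
print(round(sigma_aeolus(2.9, 0.7, 1.5), 1))  # 2.4
```

With these inputs the decomposition indeed yields 1.3-2.4 m/s rather than the 1.1-2.3 m/s stated in the manuscript, which is the discrepancy the referee asks the authors to resolve.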
line 315. I think AVATAR-T carries a 2 micron lidar, so measuring particles only. How can you compare these with Rayleigh-clear, measured in clean air conditions?
Figure 2. "Differences (dots) and average differences (lines)"
I cannot conclude from the plot that the line is the average value. For instance, in the left panel at 17500 m, all blue dots are on the right-hand side of the line. Similar issues appear at all altitudes.

In the caption of Figure 2, mention Aeolus Rayleigh-clear winds.
line 351; How is it possible to have '+'s with EE values > 5 m/s in figure 3a?
line 356; "discrepancy". This discrepancy is expected, see major comment G1.

Figure 3. The binning of the stepwise solid lines is not explained. Why does it go up to 8 m/s in 3a, while you have far fewer than 40 data points at this value?
line 407; "Table 4 describes the error dependency of the Rayleigh-clear observations with respect to the presence of clouds and dust"
This classification is not clear. Do the authors mean presence inside the bin, or from bins aloft, or both?

line 415-417. MADI compared against EE is invalid, see major comment G1. The conclusion that "EEtot in clear sky conditions is well calibrated, while it is becoming gradually too low with the increasing presence of clouds and dust" is not well explained.
Table 4. In the caption replace 25% by 50%
Figure 7a. use a different x-axis scaling to better visualize the differences between the curves, e.g., x in [-25,35]. Also for fig 8a and 9a.
Figure 7b, how do you arrive at the blue curve? And how the grey curves? Are the latter obtained from Eq.(9) with EE_Aeolus from the L2B product?
line 483; "with a minimum of 3.5 m/s ...". This value does not follow from fig 7b. Please correct.
Figure 8; the grey lines in b/c/d in look the same as in figure 7. Same for Figure 9. Despite the completely different scenes. Where do these curves come from?
Minor comments
==============
line 9; Raleigh -> Rayleigh
line 11; can be degraded -> are degraded
line 87; add "off-nadir" and "in the tropics" in "it points at 35 degrees off-nadir with an angle of ~10 degrees from the zonal direction in the tropics".
line 109; processing chain -> mission
line 112; L2bP 3.50 -> L2Bp version 3.50 (the rest of the paper uses L2B with all capitals)
line 113; "should is" - remove "should"
line 134; "Between the 7 and 28th ...". Correct to either: "Between 7 and 28 ...." or "Between the 7th and 28th of ....".
line 183; "in the presence of ...". Add "the"
line 200; remove "bin-to-bin"
line 209; Eq. (5) misses the index (i) between the brackets. Please correct.
line 232; "the the". Please correct
line 244; In contrary -> In contrast or Contrary? Please check.
line 473; black lines -> black line

Citation: https://doi.org/10.5194/egusphere-2023-742-RC1
AC1: 'Reply on RC1', Maurus Borne, 11 Sep 2023
Dear Reviewers,
We appreciate your thorough review of our manuscript and the valuable feedback provided.
Your comments have significantly contributed to the improvement of our work. Specifically, we have noted concerns regarding the readability of our figures, the incorrect comparison of MADI with EE, and the need for a clearer explanation of our cloud classification algorithm. Additionally, we understand the importance of providing a more explicit conclusion regarding whether our measurements meet ESA error requirements.
We are fully committed to addressing each of these points comprehensively and will incorporate the necessary revisions into our manuscript.
Once again, we thank you for your valuable input.
Sincerely,
Maurus Borne et al.

Citation: https://doi.org/10.5194/egusphere-2023-742-AC1
AC4: 'Reply on RC1', Maurus Borne, 29 Sep 2023
RC2: 'Comment on egusphere-2023-742', Anonymous Referee #2, 07 Aug 2023
This manuscript compares Aeolus wind measurements with coordinated radiosonde observations, targeting the overpasses of Aeolus, during a two-month tropical field campaign. The comparison allows the Aeolus observations to be placed within a proper uncertainty context for future use by modelers and others. The paper is in mostly good shape, but there are a few problems that need to be fixed.
There is one major question unaddressed. The Aeolus satellite has been in operation since 2018, so it is into its 5th year of measurements. With such a large database, it seems there should have been a number of near overpasses of the standard radiosonde network in the tropics during those 5 years. Is that not the case? If not, this should be mentioned so the motivation for the dedicated radiosonde campaign is clear. If it is the case, then such a study needs to be referenced, or the number of such previous coincidences needs to be mentioned along with the reason for excluding them from this study. There is no mention of this in the literature review.
The other major complaints concern the figures. First, the choice of symbol/line size, color, and faintness is poor. Data in the figures should not be hard to see at normal zoom levels, nor hard to distinguish between one set of data and another, but presently that is the case. In some figures the lines indicating the data are practically invisible, and the colors chosen are too close to each other. Second, there is no need to repeat the figure captions in the text. Leave that in the figure. In the text, discuss the figure; the reader will find the figure caption.
Further specific comments on these issues and a few others, along with suggested corrections follow here by line number. Text in the manuscript, or corrections to that text, are set off with ellipses. While I am willing to review a second version, that is not necessary assuming the authors make a good faith effort to address these comments.
87 … with an angle of …
100 … respectively. The 87 km is required by the lower …
109 It would be helpful to briefly mention what particles are being observed for the Mie-clear observations. Later we find that Mie-clear is not used. Why introduce a classification that is not used, for obvious reasons? Mie-clear must have no particles for scattering, so how can it work? Leave it out.
112 Is this product identified by two numbers, 12 and L2bP 3.50? This is a bit confusing. Are both numbers important for the reader? We find later neither is used further.
113 It is not surprising that the Mie-clear signal is weak, which harkens back to line 109. This should be dealt with all at once. Not piecemeal. In addition there is a problem with this sentence related to the word “should”.
125-126 Does (4d-EnVar) have to be defined twice?
Table 1 – Of what importance is the weekday? The dates would seem more important. The times are very tight for Aeolus, usually a span of one minute. But is the orbit of Aeolus so stable that it would always be 50, or 180, or … km away from the sounding location on every profile on a given weekday at exactly the same time? This needs explanation. It seems there would be some variability for soundings on different days, and some variability in the coincidence radius.
135-136 KIT has already been defined, so use it. If an acronym is not going to be used, don't define it. In fact, I don't think KIT is used again.
137 Aren’t all weather radiosondes light these days?
141 Similarly, NASA has already been defined.
149 Don’t all weather radiosondes provide wind speed, wind direction, temperature, humidity and air pressure?
171-173 Why is very high/high included for clouds below 7 km? Similarly if very high is for clouds above 16 km, why is it included for clouds between 7 and 16 km? Why are these classifications even mentioned? They are never used again.
182 …80 km and a time resolution of 6 hours …
192 What is meant by the radiosonde total horizontal wind speed? Isn’t the radiosonde wind speed averaged over each Aeolus height bin?
229 Generally ms-1 means per millisecond, whereas m/s is usually written as m s-1.
244 … In contrast, the Mie …
247 How does a wind product get a validity flag of 0? Don't the authors just mean that all Aeolus wind products with EE above 8 m s−1 for Rayleigh and 4 m s−1 for Mie are omitted? Why introduce a validity flag which is just a reflection of these criteria? The criteria mean something; the validity flag doesn't and is never mentioned again.
267 Note that Mie-clear is not included here, which begs the question of why it was ever mentioned.
4.1.1 Comparative analysis with the ECMWF model equivalents – Isn’t the comparison primarily between AEOLUS and the radiosondes? The ECMWF in the title is a bit confusing. It should be pointed out whether the ECMWF model equivalents have incorporated the sounding data, which was added to the GTS as mentioned earlier.
Figure 1. Don’t use red and orange for two colors, particularly when the red has an orange tint. Use red and green or black, something that can be clearly distinguished. Make the symbols larger.
269-274. Don’t repeat the figure caption in the text. Let the figure caption do its job.
Table 3 In the caption introduce the quantities in the same order as they appear in the Table, as was done in Table 2, for consideration of the reader, not in the reverse order as done here.
Figure 2 and its caption need work.
1) The caption puts so many qualifiers in the first sentence that the reader is unsure what difference is shown. The caption should read something like: "Differences between Aeolus (O) and radiosonde (RS) wind observations (dots) for a) Sal and b) PR/SRCX for descending (blue) and ascending (red) profiles. The solid line and shading are the average differences and their standard deviations. Average differences with ECMWF model equivalents (B) are given as dotted lines." The caption is presently so confusing that it is hard to be sure this reading is correct.
2) The individual differences (dots) are too faint to be seen clearly.
3) The dots and the averages and standard deviations are not consistent. The dots show much more spread than indicated by the standard deviations and are not consistent with the averages. For example, consider the descending profile in b) between 5 and 10 km. There are 2-3 blue dots below 0 m/s and 10 or more dots > 0 m/s, with a range of 2-8 m/s, yet the average line is between 0 and 2 m/s. Something is wrong.
4) It is not clear why the difference has to be multiplied by -1 for descending profiles. That just confuses the comparison, leaving the reader with the need to invert the descending profiles to compare with the ascending profiles. It is also not clear why this difference has to conform with the sign convention of the model coordinate system.
324-328 While the text does a somewhat better job than the figure caption there is still no need to repeat a figure caption in the text. Here is where the need to conform to the model convention and how this limits the ability to compare the ascending and descending profiles needs to be explained.
328-329 So, considering the -1 multiplication of the descending differences, isn’t this difference from -2.5 to 2.5 m/s? The fact that ascending and descending below 5 km both appear above 0 m/s in a) is a bit misleading? If they agree they should appear on opposite sides of 0 m/s as in b), correct?
Fig. 3 the cloudy colors are so faint as to be very difficult to see against the dominant blue. Use bright colors. If the Rayleigh-cloudy outlier symbols are the same size as the Rayleigh-clear outlier, then there are no such points on the plot. Don’t include a legend for points that don’t appear.
422, 451, 460 Where are the transparent symbols? Are these the fainter symbols? This criterion is not defined in the figure caption or in a legend on the figure, but should be. At present, the reader does not know which symbols these are. There are already too many colors on the plot to distinguish a transparent symbol. How is transparent brown different from, say, yellow or orange? Use a different symbol mark: box, open circle, cross.
450 What is a. u.? No panel in Fig. 6 has a scale extending to 5e13.
458 kgkg-1 ?
464 Why show measurements which are artifacts and clearly wrong. Leave them out.
Fig. 7 Make the data visible! Use thicker lines. Use a darker gray so the reader can see all the data. There is a lot of space on the figure, don’t make it difficult to see the data.
Fig. 7e) legend …High semitransparent meanly thick clouds …? Do the authors mean mainly?
479-480 … it is not surprising … Rayleigh-cloud measurements ... Isn't this obvious? Surely this aspect of the algorithms has been clearly checked for assurance that clear sky conditions are determined.
484 … with to a …?
489 In Table 1 the co-location radius was listed as 50 km, now it is 60 km. Recall the earlier comment about Table 1 and how the orbits could be so consistent with the co-location radii listed.
514 … However in panel 9d, we see …
532-533 Isn’t this a little surprising. At large co-location radii there are going to be differences just due to geophysical variations over such a large distance. The atmosphere is not that homogeneous over distances that large.
554-556 Isn’t this expected almost by definition. The signal is going to be cleaner without clouds so the instrument will perform better. Inherently Rayleigh-cloudy is going to give a weaker signal.
Citation: https://doi.org/10.5194/egusphere-2023-742-RC2
AC2: 'Reply on RC2', Maurus Borne, 11 Sep 2023
Dear Reviewers,
We appreciate your thorough review of our manuscript and the valuable feedback provided.
Your comments have significantly contributed to the improvement of our work. Specifically, we have noted concerns regarding the readability of our figures, the incorrect comparison of MADI with EE, and the need for a clearer explanation of our cloud classification algorithm. Additionally, we understand the importance of providing a more explicit conclusion regarding whether our measurements meet ESA error requirements.
We are fully committed to addressing each of these points comprehensively and will incorporate the necessary revisions into our manuscript.
Once again, we thank you for your valuable input.
Sincerely,
Maurus Borne et al.

Citation: https://doi.org/10.5194/egusphere-2023-742-AC2
AC3: 'Reply on RC2', Maurus Borne, 29 Sep 2023
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2023-742', Anonymous Referee #1, 02 Jun 2023
The authors present statistics of the validation of Aeolus winds against independent ECMWF model fields and radiosondes. This is important work to gain knowledge on the errors of Aeolus winds. The region used for validation is limited to the tropics, which on the other hand is a very interesting region because of the challenging weather conditions with dust events and convective clouds and because of limited other Aeolus related Cal/Val campaigns in this region.
Major comments
==============
G1) At many places in the paper, the authors compare MADI against EEtot. This is a fundamental mistake as MADI is not a metric related to standard deviation such as EEtot (and SMAD). The authors can confirm this by taking a sample of random numbers, with normal (Gaussian) distribution, and compare the MADI value with the input standard deviation value.G2) The authors should be more strong on their main conclusion in the abstract, by ending e.g. with: "Based on the data used in this study Aeolus Rayleigh winds do not meet the mission random error requirement and Mie winds do most likely not meet the mission bias requirement."
G3) The classes discussed in lines 171 to 174 are very unclear. For instance, what is meant with "below 3 km (very high, high, mid-level, low, very low and fractional cloud types)"? How can you have very high clouds below 3 km?
Also, why not using the useful signal at measurement level to identify clouds within the profile?G4) line 195. Did you check this statement, e.g., using spectra following Skamarock (2008). They show that the area below the kinetic energy spectrum (which is actually the atmospheric variability over the integrated scales) can be quite substantial when starting at 340 km (or truncation wavenumber 60).
General comments
================
line 11; measurements -> observations (Note that for Aeolus an observation is the result of accumulated measurements; mixing these terms in the text is confusing. Please correct everywhere in the text accordingly)
line 15; the orbital-dependent bias of up to 2.5 m/s applies to only some parts of the atmosphere. This nuance should be made here.
line 33; "..... along the LOS of the instrument, which is directed perpendicular to the direction of satellite propagation. Please add the last part.
line 54: Replace: "..... that still needs to be explored ..... potentially affecting ....." => .... that needs further exploration .... which impact ....
line 117: replace ".... some SRs ....." by ".... small SR values, which are dominated by instrument noise, ...."
line 120; I do not understand what you mean with ".... and distances between the instruments and the height bins"? Please explain or rephrase.
line 123: ".... especially in the case of strong Mie returns, which are not detected by the classification procedure, ...." The addition is important because in principle measurements with strong returns should be classified as "cloudy" and not enter the Rayleigh-clear wind.
line 138-139; vertical resolution is not in m/s. Probably you mean that the balloon ascending speed is 5 m/s, then measuring every 2 seconds gives a vertical resolution of 10 m. Please correct.
line 216; this a surrogate for the standard error, Right? Please use this more well-known terminology in statistics, rather than "uncertainty of the mean bias".
line 227; I guess the representativeness error is different for Mie and Rayleigh winds as they sample the atmosphere along different length scales, i.e., about 10-15 km for Mie and and about 90 km for Rayleigh, along the satellite track? Can the authors please comment on this?
line 239; with EE you mean EE_Aeolus as in Eq. (9), right? Please be consistent in the text
line 243; what do the authors mean with: "noise related to atmospheric temperature and pressure"? Do errors in these parameters lead to wind random errors or biases?Figure 1. red and orange are hard to discriminate. Please use a different color for orange (Rayleigh-cloudy).
Caption of figure 1, please mention explicitly that you used model equivalents from the model background (which did not (yet) use the radiosonde), see also line 129. This is important, obviously and good to mention again.line 275 mentions a STD of 2.1 m/s for sqrt(<HLOS_ECMWF-HLOS_RS>^2) at Rayleigh-clear locations. The same metric shows a value of 2.93 m/s at Mie-cloudy locations in line 279. That is quite a large difference for parameters with quite consistent and well-known error characteristics. Assuming that the quality of radiosonde observations is rather constant for the complete profile this suggests that ECMWF performs substantially worse at locations where Mie winds are found (lower troposphere) than at locations of Rayleigh-clear winds (upper troposphere, lower stratosphere). Or is this discrepancy simply a statistical effect due to the limited data set? Can the authors please comment?
line 283; "as most of the systematic and random errors seem to be specific to the Aeolus Rayleigh-clear winds". But in the text above you show that Mie-cloudy biases are larger than for Rayleigh-clear. Please correct.
line 284: "This stresses the need to identify the underlying potential error sources of Rayleigh clear observations with respect to the presence of clouds and dust aerosols ......"
Given the larger systematic errors in Mie-cloudy I would think that these are more sensitive to clouds and aerosols. The fact that random errors are larger for Rayleigh than Mie is pretty clear. Please comment.Table 2. sigma_mu is not defined in section 3.2. Please do.
line 304; "For Mie-cloudy, the systematic difference indicates a bias of 0.9±0.3 ms1, which is within the uncertainty range of the
ESA’s specification ..."
No, it is not, see major comment G2. Please correct.line 306; how do you arrive at 1.1-2.3 m/s? Following Eq.8 with sigma_rep = 1.5-2.5, sigma_RS=0.7 and sigma_tot=2.9, I end up with sigma_Aeolus in the range 1.3-2.4. Where do I go wrong? See also table 3.
line 315. I think AVATAR-T carries a 2 micron lidar, so measuring particles only. How can you compare these with Rayleigh-clear, measured in clean air conditions?
Figure 2. "Differences (dots) and average differences (lines)"
I cannot conclude from the plot that the line is the average value. For instance in the left panel at 17500 m, all blue dots are on the right hand side of the line. Similar issues appear at all altitudes.In the caption of Figure 2, mention Aeolus Rayleigh-clear winds.
line 351; How is it possible to have '+'s with EE values > 5 m/s in figure 3a?
line 356; "discrepancy". This discrepancy is expected, see major comment G1.Figure 3. The binning of the stepwise solid lines is not explained. Why does it go up to 8 m/s in 3a, while you have much less than 40 data points at this value.
line 407; "Table 4 describes the error dependency of the Rayleigh-clear observations with respect to the presence of clouds and dust"
This classification is not clear. Do the authors mean presence inside the bin or from bins aloft or both?line 415-417. MADI compared against EE is invalid, see major comment G1. The conclusion that: "EEtot in clear sky conditions is well calibrated, while it is becoming gradually too low with the increasing presence of clouds and dust." is not well explained.
Table 4. In the caption replace 25% by 50%
Figure 7a. use a different x-axis scaling to better visualize the differences between the curves, e.g., x in [-25,35]. Also for fig 8a and 9a.
Figure 7b, how do you arrive at the blue curve? And how the grey curves? Are the latter obtained from Eq.(9) with EE_Aeolus from the L2B product?
line 483; "with a minimum of 3.5 m/s ...". This value does not follow from fig 7b. Please correct.
Figure 8; the grey lines in b/c/d in look the same as in figure 7. Same for Figure 9. Despite the completely different scenes. Where do these curves come from?
Minor comments
==============
line 9; Raleigh -> Rayleigh
line 11; can be degraded -> are degraded
line 87; add "off-nadir" and "in the tropics" in "it points at 35 degrees off-nadir with an angle of ~10 degrees from the zonal direction in the tropics".
line 109; processing chain -> mission
line 112; L2bP 3.50 -> L2Bp version 3.50 (the rest of the paper uses L2B with all capitals)
line 113; "should is" - remove "should"
line 134; "Between the 7 and 28th ...". Correct to either: "Between 7 and 28 ...." or "Between the 7th and 28th of ....".
line 183; "in the presence of ...". Add "the"
line 200; remove "bin-to-bin"
line 209; Eq. (5) misses the index (i) between the brackets. Please correct.
line 232; "the the". Please correct
line 244; In contrary -> In contrast or Contrary? Please check.
line 473; black lines -> black lineCitation: https://doi.org/10.5194/egusphere-2023-742-RC1 -
AC1: 'Reply on RC1', Maurus Borne, 11 Sep 2023
Dear Reviewers,
We appreciate your thorough review of our manuscript and the valuable feedback provided.
Your comments have significantly contributed to the improvement of our work. Specifically, we have noted concerns regarding the readability of our figures, the incorrect comparison of MADI with EE, and the need for a clearer explanation of our cloud classification algorithm. Additionally, we understand the importance of providing a more explicit conclusion regarding whether our measurements meet ESA error requirements.
We are fully committed to addressing each of these points comprehensively and will incorporate the necessary revisions into our manuscript.
Once again, we thank you for your valuable input.
Sincerely,
Maurus Borne et al.
Citation: https://doi.org/10.5194/egusphere-2023-742-AC1
AC4: 'Reply on RC1', Maurus Borne, 29 Sep 2023
RC2: 'Comment on egusphere-2023-742', Anonymous Referee #2, 07 Aug 2023
This manuscript compares Aeolus wind measurements with coordinated radiosonde observations, targeting the overpasses of Aeolus, during a two-month tropical field campaign. The comparison allows the Aeolus observations to be placed within a proper uncertainty context for future use by modelers and others. The paper is in mostly good shape, but there are a few problems that need to be fixed.
There is one major question unaddressed. The Aeolus satellite has been in operation since 2018, so it is into its 5th year of measurements. With such a large database it seems there should have been a number of near overpasses of the standard radiosonde network in the tropics during those 5 years. Is that not the case? If that is not the case it should be mentioned so the motivation for the dedicated radiosonde campaign is clear. If it is the case then such a study needs to be referenced, or the number of such previous coincidences needs to be mentioned and the reason for excluding them from this study. There is no mention of this in the literature review.
The other major complaints concern the figures. First the choice of symbol/line size, color, and faintness is poor. Data in the figures should not be hard to see at normal zoom levels, and not hard to distinguish between one set of data and another, but presently that is the case. In some figures the lines indicating the data are practically invisible, and the colors chosen are too close to each other. Second there is no need to repeat in the text the figure captions. Leave that in the figure. In the text discuss the figure, the reader will find the figure caption.
Further specific comments on these issues and a few others, along with suggested corrections follow here by line number. Text in the manuscript, or corrections to that text, are set off with ellipses. While I am willing to review a second version, that is not necessary assuming the authors make a good faith effort to address these comments.
87 … with an angle of …
100 … respectively. The 87 km is required by the lower …
109 It would be helpful to briefly mention what particles are being observed for the Mie-clear observations. Later we find that Mie-clear is not used. Why introduce a classification that is not used for obvious reasons? Mie-clear must have no particles for scattering, so how can it work. Leave it out.
112 Is this product identified by two numbers, 12 and L2bP 3.50? This is a bit confusing. Are both numbers important for the reader? We find later neither is used further.
113 It is not surprising that the Mie-clear signal is weak, which harkens back to line 109. This should be dealt with all at once. Not piecemeal. In addition there is a problem with this sentence related to the word “should”.
125-126 Does (4d-EnVar) have to be defined twice?
Table 1 – Of what importance is the weekday? More important would be the dates it seems. The times are very tight for Aeolus, usually a span of one minute. But is the orbit of Aeolus that stable that it would always be 50, or 180, or … km away from the sounding location on every profile on a given week day at exactly the same time? This needs explanation. It seems there would be some variability for soundings on different days, and some variability on the coincidence radius.
135-136 KIT has already been defined, so use it. If an acronym is not going to be used, don't define it. In fact I don't think KIT is used again.
137 Aren’t all weather radiosondes light these days?
141 Similarly, NASA has already been defined.
149 Don’t all weather radiosondes provide wind speed, wind direction, temperature, humidity and air pressure?
171-173 Why is very high/high included for clouds below 7 km? Similarly if very high is for clouds above 16 km, why is it included for clouds between 7 and 16 km? Why are these classifications even mentioned? They are never used again.
182 …80 km and a time resolution of 6 hours …
192 What is meant by the radiosonde total horizontal wind speed? Isn’t the radiosonde wind speed averaged over each Aeolus height bin?
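For context, the bin averaging this comment asks about can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' actual code: the function name, the default azimuth value, and the HLOS sign convention are assumptions (the manuscript only states that the line of sight is ~10 degrees from the zonal direction in the tropics).

```python
import numpy as np

def bin_average_hlos(rs_alt, rs_u, rs_v, bin_edges, azimuth_deg=100.0):
    """Average radiosonde winds over Aeolus-like range bins and project
    them onto the horizontal line of sight (HLOS).

    azimuth_deg and the sign convention are illustrative assumptions.
    """
    az = np.deg2rad(azimuth_deg)
    # Project the horizontal wind (u, v) onto the lidar azimuth
    rs_hlos = -(np.asarray(rs_u) * np.sin(az) + np.asarray(rs_v) * np.cos(az))

    idx = np.digitize(rs_alt, bin_edges) - 1  # bin index of each sample
    out = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(out)):
        sel = idx == b
        if sel.any():
            out[b] = rs_hlos[sel].mean()  # plain mean within each range bin
    return out
```

The answer the comment is after, presumably, is whether the high-resolution radiosonde samples are averaged like this within each Aeolus height bin before differencing, or compared as point values.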
229 Generally ms-1 means per millisecond, whereas m/s is usually written as m s-1.
244 … In contrast, the Mie …
247 How does a wind product get a validity flag of 0? Don’t the authors just mean that, … all Aeolus wind products with EE above 8 ms–1 for Rayleigh and 4 ms–1 for Mie, are omitted. … Why introduce a validity flag which is just a reflection of these criteria. The criteria mean something. The validity flag doesn’t and is never mentioned again.
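The criterion the comment proposes amounts to a plain threshold filter on the estimated error (EE). A minimal sketch, assuming simple Python lists rather than the real L2B product structures; the function name is hypothetical, only the thresholds (8 m/s for Rayleigh, 4 m/s for Mie) come from the manuscript:

```python
def qc_filter(winds, ee, channel):
    """Keep only winds whose estimated error (EE) is below the
    channel-specific threshold stated in the manuscript."""
    thresholds = {"rayleigh": 8.0, "mie": 4.0}
    thr = thresholds[channel]
    return [(w, e) for w, e in zip(winds, ee) if e < thr]
```

Expressed this way, the thresholds carry all the meaning and no separate validity flag is needed, which is the reviewer's point.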
267 Note that Mie-clear is not included here, which begs the question of why it was ever mentioned.
4.1.1 Comparative analysis with the ECMWF model equivalents – Isn't the comparison primarily between Aeolus and the radiosondes? The ECMWF in the title is a bit confusing. It should be pointed out whether the ECMWF model equivalents have incorporated the sounding data, which was added to the GTS as mentioned earlier.
Figure 1. Don’t use red and orange for two colors, particularly when the red has an orange tint. Use red and green or black, something that can be clearly distinguished. Make the symbols larger.
269-274. Don’t repeat the figure caption in the text. Let the figure caption do its job.
Table 3 In the caption introduce the quantities in the same order as they appear in the Table, as was done in Table 2, for consideration of the reader, not in the reverse order as done here.
Figure 2 and its caption need work.
1) The caption puts so many qualifiers in the first sentence that the reader is unsure what difference is shown. The caption should read something like: Differences between Aeolus (O) and radiosonde (RS) wind observations (dots) for a) Sal and b) PR/SRCX for descending (blue) and ascending (red) profiles. The solid line and shading are the average differences and their standard deviations. Average differences with ECMWF model equivalents (B) are given as dotted lines. Is this correct? It is presently so confusing it is hard to be sure.
2) The individual differences (dots) are too faint to be seen clearly.
3) The dots and the averages and standard deviations are not consistent. The dots show much more spread than indicated by the standard deviations and are not consistent with the averages. For example, consider the descending profile in b) between 5 and 10 km. There are 2-3 blue dots below 0 m/s and 10 or more dots > 0 m/s, with a range of 2-8 m/s, yet the average line is between 0 and 2 m/s. Something is wrong.
4) It is not clear why the difference has to be multiplied by -1 for descending profiles. That just confuses the comparison, leaving the reader with the need to invert the descending profiles to compare with the ascending profiles. It is also not clear why this difference has to conform with the sign convention of the model coordinate system.
324-328 While the text does a somewhat better job than the figure caption there is still no need to repeat a figure caption in the text. Here is where the need to conform to the model convention and how this limits the ability to compare the ascending and descending profiles needs to be explained.
328-329 So, considering the -1 multiplication of the descending differences, isn’t this difference from -2.5 to 2.5 m/s? The fact that ascending and descending below 5 km both appear above 0 m/s in a) is a bit misleading? If they agree they should appear on opposite sides of 0 m/s as in b), correct?
Fig. 3 the cloudy colors are so faint as to be very difficult to see against the dominant blue. Use bright colors. If the Rayleigh-cloudy outlier symbols are the same size as the Rayleigh-clear outlier, then there are no such points on the plot. Don’t include a legend for points that don’t appear.
422, 451, 460 Where are the transparent symbols? Are these the fainter symbols? This criterion is not defined in the figure caption or in a legend on the figure, but should be. At present the reader does not know which symbols these are. There are already too many colors on the plot to distinguish a transparent symbol. How is transparent brown different from, say, yellow or orange? Use a different symbol mark: box, open circle, cross.
450 What is a. u.? No panel in Fig. 6 has a scale extending to 5e13.
458 kgkg-1 ?
464 Why show measurements which are artifacts and clearly wrong. Leave them out.
Fig. 7 Make the data visible! Use thicker lines. Use a darker gray so the reader can see all the data. There is a lot of space on the figure, don’t make it difficult to see the data.
Fig. 7e) legend …High semitransparent meanly thick clouds …? Do the authors mean mainly?
479-480 … it is not surprising … Rayleigh-cloud measurements ... Isn't this obvious? Surely this aspect of the algorithms has been clearly checked for assurance that clear sky conditions are determined.
484 … with to a …?
489 In Table 1 the co-location radius was listed as 50 km, now it is 60 km. Recall the earlier comment about Table 1 and how the orbits could be so consistent with the co-location radii listed.
514 … However in panel 9d, we see …
532-533 Isn’t this a little surprising. At large co-location radii there are going to be differences just due to geophysical variations over such a large distance. The atmosphere is not that homogeneous over distances that large.
554-556 Isn’t this expected almost by definition. The signal is going to be cleaner without clouds so the instrument will perform better. Inherently Rayleigh-cloudy is going to give a weaker signal.
Citation: https://doi.org/10.5194/egusphere-2023-742-RC2
AC2: 'Reply on RC2', Maurus Borne, 11 Sep 2023
Dear Reviewers,
We appreciate your thorough review of our manuscript and the valuable feedback provided.
Your comments have significantly contributed to the improvement of our work. Specifically, we have noted concerns regarding the readability of our figures, the incorrect comparison of MADI with EE, and the need for a clearer explanation of our cloud classification algorithm. Additionally, we understand the importance of providing a more explicit conclusion regarding whether our measurements meet ESA error requirements.
We are fully committed to addressing each of these points comprehensively and will incorporate the necessary revisions into our manuscript.
Once again, we thank you for your valuable input.
Sincerely,
Maurus Borne et al.
Citation: https://doi.org/10.5194/egusphere-2023-742-AC2
AC3: 'Reply on RC2', Maurus Borne, 29 Sep 2023
Peter Knippertz
Martin Weissmann
Benjamin Witschas
Cyrille Flamant
Rosimar Rios-Berrios
Peter Veals
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.