This work is distributed under the Creative Commons Attribution 4.0 License.
Validation of the Aeolus L2A products with the eVe reference lidar measurements from the ASKOS/JATAC campaign
Abstract. Aeolus was an ESA Earth Explorer satellite mission launched in 2018, with a lifetime of almost five years. The mission carried the Atmospheric Laser Doppler Instrument (ALADIN), a Doppler wind lidar providing wind profiles on a global scale as well as vertically resolved optical properties of particles (aerosols and clouds) using the high spectral resolution lidar technique. To validate the particle optical properties delivered by Aeolus as Level 2A (L2A) products, the eVe lidar, ESA's reference system for the calibration and validation of the Aeolus mission, was deployed during the ASKOS campaign in the framework of the Joint Aeolus Tropical Atlantic Campaign (JATAC). ASKOS is the ground-based component of JATAC, in which ground-based remote sensing and in-situ instrumentation for aerosol, cloud, wind, and radiation observations was deployed in Cabo Verde during the summers of 2021 and 2022 for the validation of the Aeolus products. The eVe lidar is a combined linear/circular polarization and Raman lidar specifically designed to mimic the operation of Aeolus and to provide ground-based reference measurements of the optical properties of aerosols and thin clouds for the validation of the Aeolus L2A products, while taking into consideration ALADIN's inability to detect the cross-polar component of the backscattered signal. In this validation study, the Aeolus L2A profiles obtained from the Standard Correct Algorithm (SCA), the Maximum Likelihood Estimation (MLE), and the AEL-PRO algorithms of Baseline 16, screened for cloud-contaminated bins, are compared against the corresponding cloud-free, Aeolus-like profiles from the eVe lidar, which are harmonized to the Aeolus L2A profiles, using the 14 collocated measurements between eVe and Aeolus acquired during the nearest Aeolus overpasses of the ASKOS site.
The validation results reveal good performance for the co-polar particle backscatter coefficient, the most accurate Aeolus L2A product, with overall errors of up to 2 Mm⁻¹ sr⁻¹, followed by the noisier particle extinction coefficient, with overall errors of up to 183 Mm⁻¹, and the co-polar lidar ratio, the noisiest L2A product, with extreme error values and variability. The discrepancies between the eVe and Aeolus L2A profiles increase at lower altitudes, where higher atmospheric loads (which increase the noise in the Aeolus retrievals through enhanced laser beam attenuation) and greater atmospheric variability (e.g. PBL inhomogeneities) are typically encountered. Overall, this study underlines the strengths of the optimal estimation algorithms (MLE and AEL-PRO), which show consistent performance and reduced discrepancies, while the originally developed standard inversion algorithm (SCA) could be further improved, particularly in the retrieval of the particle extinction coefficient and lidar ratio. In addition, the SCA mid-bin resolution profiles outperform the corresponding SCA Rayleigh-bin profiles, as expected, since the mid-bin resolution is obtained by averaging the values of two consecutive SCA Rayleigh height bins.
Competing interests: At least one of the (co-)authors is a member of the editorial board of Atmospheric Measurement Techniques.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. Views expressed in the text are those of the authors and do not necessarily reflect the views of the publisher.
Status: closed
RC2: 'Comment on egusphere-2025-1152', Anonymous Referee #2, 25 May 2025
Citation: https://doi.org/10.5194/egusphere-2025-1152-RC2
AC2: 'Reply on RC2', Peristera Paschou, 25 Jul 2025
Reply to Anonymous Reviewer #2
General comment:
This comprehensive study validates Aeolus Level 2A aerosol and cloud products, derived from three algorithms (SCA, MLE, AEL-PRO), against the ground-based eVe lidar during the ASKOS/JATAC campaign in Cabo Verde. It is the first manuscript to systematically compare all of the L2A algorithms using statistical analysis of backscatter, extinction coefficient, and lidar ratio across 14 collocated cases. The results show that overall errors are mainly due to random variability rather than systematic biases. The optimal estimation algorithms (MLE and AEL-PRO) outperform the standard SCA, particularly for the extinction and lidar ratio, which are much noisier than the backscatter, with greater discrepancies at lower altitudes likely caused by atmospheric variability and signal attenuation.
The manuscript is well written, with a strong introduction that effectively places the study within the broader context. It also nicely demonstrates that Aeolus, the wind mission, is suitable for aerosol research as well, albeit with some limitations. Furthermore, the study showcases the three Aeolus L2A algorithms and gives readers a sense of which algorithm performs well, and which less so, under different conditions.
The results section includes an overwhelming amount of numerical detail in the text, which makes it difficult to follow. A clearer structure, with some numbers moved into a table, would improve readability. Additionally, the discussion lacks depth regarding the implications of the findings; a more thorough interpretation of the results would enhance the scientific impact. In the conclusion, rather than reiterating the results, it would be more valuable to provide insight into what these findings mean for past and future satellite missions, especially in view of long-term aerosol research.
I am confident that after some further effort in refining the discussion and conclusions, the manuscript's impact will be significantly enhanced, thereby improving its scientific relevance for aerosol research. I therefore recommend the following minor revisions.
Author’s reply: We would like to thank the reviewer for their constructive comments. We acknowledge that, by addressing these comments and suggestions, the manuscript has been significantly improved. All points raised have been carefully considered, and the corresponding revisions and insertions have been made in the manuscript. A point-by-point response is provided below, along with the revised manuscript showing tracked changes (attached pdf).
Detailed comments:
- Chapter “Methods and Datasets”: It describes the campaign and the datasets, but not methods. At the beginning of chapter 3, the collocation criteria and filtering are described, and in the “Statistical analysis” chapter, the bias and random error metrics are described. I would recommend restructuring these parts a bit: either move the collocation criteria and bias metrics into the Methods chapter, or just call the first one “Datasets and campaign”.
Author’s reply: The chapter has been renamed to “Datasets and campaign”.
- L.282: What about MLEsub? It is available at a finer horizontal resolution (~18 km) and was introduced with the B16 reprocessed dataset, but you are not using it in your study. If it is out of the scope of the study, that is fine, but you should at least mention it and add a comment in the discussion that, for certain atmospheric conditions (low-level variability), it could improve the agreement between the ground-based and satellite-based comparison.
Author’s reply: A discussion of the MLEsub retrievals has been added after L282 and in the ‘Conclusions’ section.
- Line 334: It is mentioned that the Aeolus L2A quality assurance flag is not used because it reduces the available data significantly. However, the L2A user guide recommends using the QC flag. How would the results look with the QC applied? Is it possible to quantify how much more data would be rejected? Otherwise, I would recommend adding a comment that justifies why the quality assurance flag has not been used.
Author’s reply: We decided not to include the quality check flag in our analysis because i) it is available only for the SCA and MLE retrievals, resulting in imbalanced sample sizes between SCA/MLE and AEL-PRO, and ii) the sample size is significantly reduced (mainly at lower heights). A dedicated comment has been added at line 334, together with two indicative plots (as Appendix A) showing the statistical results for the systematic errors in backscatter and extinction when both the Cloud Mask and the Quality Check flags are taken into account.
- Chapter 3.2 Statistical analysis: For the case study in 3.1 it is very suitable to include the numbers in the running text, but the statistical analysis is too heavily loaded with numbers. I would suggest putting the most important numbers in a table (although I am not quite sure how best to realise this, because you are talking about different altitude ranges) and rather stating in the text, e.g., “overestimates, strongest for SCA-mid, lowest for AEL-PRO”.
Author’s reply: Tables with the statistical results have been added as Appendix B.
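For readers unfamiliar with the bias and random-error metrics discussed in this exchange, the following is a minimal sketch of how per-altitude-bin mean bias and RMSE profiles can be computed from collocated profiles. The array layout, bin count, and function name are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def bias_rmse_profiles(aeolus, eve, n_bins=5):
    """Mean bias and RMSE per coarse altitude bin for collocated profiles.

    aeolus, eve: arrays of shape (n_cases, n_heights) on a common height
    grid; NaNs mark screened-out (e.g. cloud-flagged) bins.
    """
    diff = aeolus - eve                          # per-bin deviations
    # split the height axis into n_bins coarse altitude bins
    edges = np.linspace(0, diff.shape[1], n_bins + 1, dtype=int)
    bias, rmse = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        d = diff[:, lo:hi]
        d = d[np.isfinite(d)]                    # drop flagged bins
        bias.append(d.mean() if d.size else np.nan)
        rmse.append(np.sqrt(np.mean(d**2)) if d.size else np.nan)
    return np.array(bias), np.array(rmse)
```

A bias near zero with a large RMSE is the signature the reviewers describe: errors dominated by random variability rather than a systematic offset.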
- Not mandatory, but a recommendation to make the article better structured: the 9x2 figures in the statistical analysis are an overwhelming number of figures. If you want to keep them all in the manuscript, I would suggest making three figures of 3x2 panels each (first row backscatter versus altitude, second row backscatter versus OOD, third row backscatter versus scattering ratio), the next figure for extinction, and the third for the lidar ratio. That would make the discussion of the results more logical in my opinion. Alternatively, keep them, but remove the lidar ratio analysis for OOD and scattering ratio, with the explanation that the bias and RMSE are too high for all algorithms, which makes it challenging to draw conclusions on the impact of OOD and scattering ratio on the lidar ratio.
Author’s reply: The figures have been re-organized. The lidar ratio plots for OOD and scattering ratio classes have been removed since the metrics exhibit extremely high values.
- Conclusions: I would like to see a more in-depth discussion of the results. What are the implications of the findings for long-term aerosol research? Is Aeolus suitable for filling global aerosol datasets together with CALIOP and EarthCARE? How important would it be for an Aeolus-like future mission to include a depolarisation channel? What are the main lessons learnt from this study? How representative is your comparison with respect to other locations in the world and other aerosol types?
Author’s Reply: The conclusions have been strongly modified, taking into account the related comments from Reviewer 2 (and Reviewer 1).
Minor technical comments:
- “Aeolus like” > 23 matches in the manuscript > It should be “Aeolus-like”
Author’s Reply: Change applied to all matches
- L.29: “The eVe lidar …” is a very long sentence. Try to split in two (the cross-polar component in the second sentence).
Author’s Reply: Revised. Sentence split in two.
- L.36: “during the nearest“ Please specify the applied collocation criteria (within 100 km…)
Author’s Reply: Revised. Collocation criteria added in the sentence.
- L.141: “the rest Aeolus algorithms” > rephrase to “the optimal estimation algorithms”
Author’s Reply: Rephrased
- L.199: The eVe lidar, deployed during the ASKOS/JATAC campaign, performed …
Author’s Reply: Revised by removing the “deployed during the ASKOS/JATAC campaign”.
- L.205: "14 collocated profiles". What about including the -1 and +1 profiles in addition to the direct overpass, to consolidate the statistics? Aeolus profiles may indeed be individually affected by low SNR and therefore discarded, especially in the lower troposphere. Maybe this could be mentioned in the discussion of the results.
Author’s Reply: We only considered the nearest Aeolus profile, because this guarantees that the atmosphere-related error (due to inhomogeneities and small-scale aerosol structures and gradients that may occur, mainly, inside the PBL) is minimal and random for the characterization of the systematic error. By also including the ±1 profiles around the nearest one, we would risk favouring specific scenes (where more BRCs would be taken into account based on the spatial collocation criterion of a 100 km radius), which would affect the calculation of the random errors. A dedicated discussion has been added in section 4 (Discussion and conclusions).
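As an aside for readers, a spatial collocation criterion of the kind mentioned above (a 100 km radius around the ground site) is typically checked with a haversine great-circle distance test. The sketch below is generic; the coordinates and radius in the usage note are illustrative, not taken from the manuscript:

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def within_radius(lat1, lon1, lat2, lon2, radius_km=100.0):
    """Haversine great-circle distance test for spatial collocation.

    Coordinates in degrees; returns True when the two points lie
    within radius_km of each other.
    """
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlam = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    dist_km = 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))
    return dist_km <= radius_km
```

For example, a satellite ground-track point 0.5° of latitude away (~55 km) would pass the test, while one 2° away (~222 km) would not.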
- Line 226: No need to mention the MCA, because you are not using this product. For your analysis, you don’t need to provide a full overview about the L2A product (This is done in the Flamant et al publications), only those parts you are using.
Author’s Reply: The MCA algorithm is mentioned in the context of giving a historical overview of the available Aeolus algorithms. The authors believe that the mention of the MCA in line 226 should not be removed, but the brief discussion about the MCA in lines 234-242 has been removed.
- L.277: add citation for L-BFGS-B c (Zhu et al., 1997)
Author’s Reply: Citation added.
- L.332: "Cloud flag from SCA algorithm". You should point to the ATBD for details (it is based on model ice and liquid water content and comes with a total cloud backscatter threshold of 1.0 × 10⁻⁷ m⁻¹ sr⁻¹).
Author’s Reply: Short discussion on the SCA cloud flag has been added, citing the ATBD (Flamant et al., 2022) for more details.
- L.375: When you restrict xlim for Fig.3 and Fig.4 (for backscatter and extinction), this may help analysing the profile deviations.
Author’s Reply: The limits of x-axis of all affected figures have been updated accordingly.
- L.536: The lidar ratio shows the opposite behaviour to extinction and backscatter: the decrease in lidar ratio bias and random error with increasing optical depth could be explained by correlated errors in the extinction and backscatter retrievals under strong attenuation, causing their ratio to appear more stable even as the individual quantities become less reliable. This might be a retrieval artefact.
Author’s Reply: The reviewer is right about the correlated errors and the lidar ratio retrieval, but in any case the errors are far too high to draw any conclusions. Moreover, there is a similar comment from Reviewer 1. As such, the related figures (12 and 13) have been removed, and a brief discussion about the high variability of the lidar ratio errors per bin has been added at line 478.
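The correlated-error mechanism invoked in this exchange can be illustrated with a quick Monte Carlo sketch (all error magnitudes and true values below are hypothetical): when extinction and backscatter share a dominant common error, their ratio, the lidar ratio, scatters far less than when the errors are independent, even though the individual retrievals are equally unreliable in both cases.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
alpha_true = 100.0   # extinction, Mm^-1 (hypothetical dust-like value)
beta_true = 2.0      # backscatter, Mm^-1 sr^-1 -> true lidar ratio 50 sr

# correlated case: a large shared error plus small individual errors
shared = rng.normal(0.0, 0.15, n)
alpha_c = alpha_true * (1.0 + shared + rng.normal(0.0, 0.03, n))
beta_c = beta_true * (1.0 + shared + rng.normal(0.0, 0.03, n))
lr_corr = alpha_c / beta_c

# independent case: same total error budget, but uncorrelated
sigma = np.hypot(0.15, 0.03)
alpha_i = alpha_true * (1.0 + rng.normal(0.0, sigma, n))
beta_i = beta_true * (1.0 + rng.normal(0.0, sigma, n))
lr_indep = alpha_i / beta_i

print("lidar ratio std, correlated errors: ", np.std(lr_corr))
print("lidar ratio std, independent errors:", np.std(lr_indep))
```

The shared error cancels in the ratio, so the correlated case yields a much tighter lidar ratio distribution around 50 sr, which is exactly why the apparent stability at high optical depth should be interpreted with care.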
- L.559: "cloud flagged bin from SCA". Like in line 332, should mention that it is a model based cloud screening.
Author’s Reply: Revised as in the comment at Line 332
- L.608: Best agreement for the lidar ratio between Aeolus and eVe at high OOD > Same comment as for L.536: it is certainly good to mention it, but you may add a comment that this behaviour should be analysed with care because of potential lidar ratio retrieval artefacts due to the correlated errors in extinction and backscatter.
Author’s Reply: Since the errors for lidar ratio are extremely large also in the analysis for different OOD and scattering ratio classes, figures 12 and 13 along with the extended discussion have been removed from the manuscript and a short discussion has been added instead, highlighting the low accuracy and high variability of the Aeolus lidar ratio retrievals.
Citation: https://doi.org/10.5194/egusphere-2025-1152-AC2
RC1: 'Comment on egusphere-2025-1152', Anonymous Referee #1, 16 May 2025
The manuscript by Paschou et al. provides a detailed insight into the validation of the Aeolus spin-off aerosol optical properties products (four different product algorithms). For this purpose, a dedicated ground-based reference instrument was operated on the Cabo Verdean islands in the tropical Atlantic. The authors accomplished 14 direct overpasses, which is remarkable given the 7-day repeat cycle and the remote location. The authors discuss one case study in detail and perform a statistical analysis, for which they retrieve statistical metrics, even though the dataset is sparse with 14 overpasses and thus not fully significant for the overall Aeolus performance in terms of region, time, and aerosol conditions.
Nevertheless, it is the optimum approach given the dataset, and valuable for the community. Furthermore, they investigate their findings for additional boundary conditions such as the overlying optical depth and scattering ratio, which is indeed of great interest. For this reason, the paper is in general well suited for AMT and this special issue. Nevertheless, I propose minor revisions, as I have some general comments that should be considered and a number of minor, technical comments in the attached pdf.
General comments:
- Despite the great observations and general results, the publication in my opinion lacks a proper discussion of the results. The current manuscript mostly reports the numbers of the comparison, which is not accessible to a reader who is not a validation expert. At least an attempt should be made to contextualize the results in a broader view.
For example, please try to conclude, based on your analysis, which of the four algorithms performs best. If none can be identified, and each has its pros and cons, this is also a conclusion. Also state if there is one algorithm you cannot recommend for specific aerosol studies. Basically, you operated the only system measuring the same properties as Aeolus at the same viewing angle, and thus you are allowed to make clear statements, even though, of course, such statements might not be valid globally, for all baselines, and for the entire Aeolus lifetime.
- In this context, the conclusion really needs to be reworked. Currently it is just a summary or repetition of the chapter above, but not a conclusion. Please highlight in the conclusion only the important findings, in a short and concise way, and discuss them with respect to the performance of Aeolus so that one can draw conclusions easily. Currently only the very last passage is a real conclusion.
- As stated, for the reader it would be beneficial to get some interpretation of the results guiding towards the performance of Aeolus. I know it is difficult, and it might only be valid for the Cabo Verde island area, but it is still worth stating something like: based on our statistical analysis, studies of the dust layer can be performed using the xx algorithm when interested in layer heights, but algorithm yyy seems to be more suited for the study of optical properties.
- I am not happy about the frequent and multiple use of the word “bias”, which is partly incorrect. If you have a deviation of the retrieved values for a case study, you cannot already claim it as a bias. In fact, you could only claim a deviation when considering the uncertainty of both systems. A bias itself can only be stated if you have a sufficient statistical analysis. Please correct this throughout the manuscript.
- I am really puzzled by the fact that you report no bias for the lidar ratio in cases with high atmospheric load (OOD 0.73–1.19 and scattering ratio 1.27–1.9). Either the result is not statistically relevant, due to the large uncertainty in backscatter and extinction reported just before, and just a coincidence, in which case you should leave the statement out. Or it is due to the fact that the lidar ratio is an intensive property, and thus, for the same aerosol type, spatial variability cancels out. Some discussion about this would be appreciated.
- Concerning the findings in the PBL, conclusions on the representativeness, especially with regard to the coarse Aeolus resolution, would be very welcome, also because a follow-on mission is planned.
- In the introduction, many terms are used that are only defined later, and thus it is partly not understandable for a non-expert. Please change this.
- I personally think that the nomenclature SCA-Rayleigh-bin is misleading. Maybe SCA-single-bin might be more appropriate?
- Please use a unique nomenclature throughout the manuscript. E.g., the mean bias is defined multiple times, which is really confusing.
- Generally, please explain all abbreviations when used the first time.
- Furthermore, please check all your plots for color-blind friendliness. I think green, red, orange, and brown together are among the worst color choices for some readers.
More specific comments are provided in the pdf.
-
AC1: 'Reply on RC1', Peristera Paschou, 25 Jul 2025
Authors statement
We would like to thank the reviewer for their constructive comments. We acknowledge that addressing the comments and suggestions has led to significant improvements in the manuscript. All general and specific comments have been carefully addressed, and the corresponding revisions and insertions have been made in the manuscript. In the supplement (pdf file), we provide our point-by-point responses under each of the reviewer's comments (including the specific comments provided in a supplement).
-
AC1: 'Reply on RC1', Peristera Paschou, 25 Jul 2025
-
RC2: 'Comment on egusphere-2025-1152', Anonymous Referee #2, 25 May 2025
General comment:
This comprehensive study validates Aeolus Level 2A aerosol and cloud products, derived from three algorithms (SCA, MLE, AEL-PRO), against the ground-based eVe lidar during the ASKOS/JATAC campaign in Cabo Verde. It is the first manuscript to systematically compare all of the L2A algorithms using statistical analysis of backscatter, extinction coefficient, and lidar ratio across 14 collocated cases. The results show that overall errors are mainly due to random variability rather than systematic biases. The optimal estimation algorithms (MLE and AEL-PRO) outperform the standard SCA, particularly for the extinction and lidar ratio which are much more noisy compared to backscatter, with greater discrepancies at lower altitudes likely caused by atmospheric variability and signal attenuation.
The manuscript is well written, with a strong introduction that effectively places the study within the broader context. Also it nicely demonstrated the capabilities of Aeolus, the wind mission, being also suitable for aerosol research, with some limitations though. Furthermore, the study advertises the three Aeolus L2A algorithms and provides the readers some feelings about which algorithm performs good and not to good in different conditions.
The results section includes an overwhelming amount of numerical detail in the text, which makes it difficult to follow. A clearer structure with some numbers taken out in a table would improve readability. Additionally, the discussion lacks depth regarding the implications of the findings; a more thorough interpretation of the results would enhance the scientific impact. In the conclusion, rather than reiterating the results, it would be more valuable to provide insight into what these findings mean for past and future satellite missions, especially in view of long-term aerosol research.
I am confident that after some further effort in refining the discussions and conclusions, the manuscript’s impact will be significantly enhanced, thereby improving its scientific relevance for aerosol research. We therefore recommend the following minor revisions.
Detailed comments:
- Chapter “Methods and Datasets”: It describes the campaign and the datasets, but not methods. In the beginning of chapter 3, the collocated criteria and filtering is described and in the “Statistical analysis” chapter, the bias and random error metrics are described. I would recommend to restructure these parts a bit. Either move the collocation criteria and bias metrics part in the Methods chapter, or just call the first one “Datasets and campaign”.
- L.282: What about MLEsub ? It is available at finer horiz res. ~ 18km and was introduced within B16 reprocessed dataset. But you are not using it in your study. If it is out of the scope of the study, it is fine, but you should at least mention it and add a comment in the discussions, that for certain atmospheric conditions (low-level variability), it could improve the agreement between ground-based and satellite based comparison.
- Line 334: It is mentioned that the Aeolus L2A quality assurance flag is not used because it reduces the available data significantly. However, the L2A user guide recommends using the QC flag. What would the results look like with the QC applied? Is it possible to quantify how much more data would be rejected? Otherwise, I would recommend adding a comment that justifies why the quality assurance flag has not been used.
- Chapter 3.2 Statistical analysis: For the case study in 3.1 it is very suitable to include the numbers in the running text, but the statistical analysis is too heavily loaded with numbers. I would suggest putting the most important numbers in a table (although I am not quite sure how best to realise it, because you are talking about different altitude ranges) and rather using qualitative statements in the text, e.g. “overestimates, strongest for SCA-mid, lowest for AEL-PRO”.
- Not mandatory, but a recommendation to make the article better structured: 9x2 figures in the statistical analysis is an overwhelming amount of figures. If you want to keep them all in the manuscript, I would suggest making three figures of 3x2 panels each (first row backscatter versus altitude, second row backscatter versus OOD, third row backscatter versus scattering ratio), the next figure for extinction, and the third for lidar ratio. That would make the discussion of the results more logical in my opinion. Alternatively, keep them, but remove the lidar ratio analysis for OOD and scattering ratio with the explanation that the bias and RMSE are too high for all algorithms, which makes it challenging to draw conclusions on the impact of OOD and scattering ratio on the lidar ratio.
- Conclusions: I would like to see a more in-depth discussion of the results. What are the implications of the findings for long-term aerosol research? Is Aeolus suitable to fill global aerosol datasets together with CALIOP and EarthCARE? How important would it be for Aeolus-like future missions to include a depolarisation channel? What are the main lessons learnt from this study? How representative is your comparison for other locations in the world and for other aerosol types?
Minor technical comments:
- “Aeolus like” > 23 matches in the manuscript > It should be “Aeolus-like”
- L.29: “The eVe lidar …” is a very long sentence. Try to split in two (the cross-polar component in the second sentence).
- L.36: “during the nearest“ Please specify the applied collocation criteria (within 100 km…)
- L.141: “the rest Aeolus algorithms” > rephrase to “the optimal estimation algorithms”
- L.199: The eVe lidar, deployed during the ASKOS/JATAC campaign, performed …
- L.205 : "14 collocated profiles". What about including profiles -1 and +1 in addition to direct overpass to consolidate the statistics ? Aeolus profiles may indeed be individually affected by low SNR and therefore discarded, especially in the lower troposphere. Maybe to be mentioned in the discussion of the results.
- Line 226: No need to mention the MCA, because you are not using this product. For your analysis, you do not need to provide a full overview of the L2A product (this is done in the Flamant et al. publications), only the parts you are using.
- L.277: add a citation for L-BFGS-B (Zhu et al., 1997)
- L.332: "Cloud flag from SCA algorithm". Should point ATBD to detail it (based on model ice and liquid water content), and coming with total cloud backscatter threshold of 1.0 × 10 ^-7 m^-1sr^-^1
- L.375: Restricting the x-axis limits of Fig. 3 and Fig. 4 (for backscatter and extinction) may help in analysing the profile deviations.
- L.536: The lidar ratio shows the opposite behaviour to extinction and backscatter: the decrease in lidar ratio bias and random error with increasing optical depth could be explained by correlated errors in the extinction and backscatter retrievals under strong attenuation, causing their ratio to appear more stable even as the individual quantities become less reliable. This might be a retrieval artefact.
- L.559: "cloud flagged bin from SCA". Like in line 332, should mention that it is a model based cloud screening.
- L.608: Best agreement for the lidar ratio between Aeolus and eVe at high OOD. Same comment as for L.536: it is certainly good to mention it, but you may add a comment that this behaviour should be analysed with care, because of potential retrieval artefacts due to the correlated errors in extinction and backscatter.
Citation: https://doi.org/10.5194/egusphere-2025-1152-RC2 -
AC2: 'Reply on RC2', Peristera Paschou, 25 Jul 2025
Reply to Anonymous Reviewer #2
General comment:
This comprehensive study validates Aeolus Level 2A aerosol and cloud products, derived from three algorithms (SCA, MLE, AEL-PRO), against the ground-based eVe lidar during the ASKOS/JATAC campaign in Cabo Verde. It is the first manuscript to systematically compare all of the L2A algorithms using statistical analysis of backscatter, extinction coefficient, and lidar ratio across 14 collocated cases. The results show that overall errors are mainly due to random variability rather than systematic biases. The optimal estimation algorithms (MLE and AEL-PRO) outperform the standard SCA, particularly for the extinction and lidar ratio, which are much noisier compared to backscatter, with greater discrepancies at lower altitudes likely caused by atmospheric variability and signal attenuation.
The manuscript is well written, with a strong introduction that effectively places the study within the broader context. It also nicely demonstrates the capabilities of Aeolus, the wind mission, as being suitable for aerosol research as well, albeit with some limitations. Furthermore, the study presents the three Aeolus L2A algorithms and gives readers a sense of which algorithm performs well or poorly under different conditions.
The results section includes an overwhelming amount of numerical detail in the text, which makes it difficult to follow. A clearer structure, with some numbers moved into a table, would improve readability. Additionally, the discussion lacks depth regarding the implications of the findings; a more thorough interpretation of the results would enhance the scientific impact. In the conclusion, rather than reiterating the results, it would be more valuable to provide insight into what these findings mean for past and future satellite missions, especially in view of long-term aerosol research.
I am confident that, after some further effort in refining the discussion and conclusions, the manuscript's impact will be significantly enhanced, thereby improving its scientific relevance for aerosol research. I therefore recommend the following minor revisions.
Author’s reply: We would like to thank the reviewer for their constructive comments. We acknowledge that, by addressing these comments and suggestions, the manuscript has been significantly improved. All points raised have been carefully considered, and the corresponding revisions and insertions have been made in the manuscript. A point-by-point response is provided below, along with the revised manuscript showing track changes (attached pdf).
Detailed comments:
- Chapter “Methods and Datasets”: It describes the campaign and the datasets, but not methods. At the beginning of chapter 3, the collocation criteria and filtering are described, and in the “Statistical analysis” chapter, the bias and random error metrics are described. I would recommend restructuring these parts a bit: either move the collocation criteria and bias metrics into the Methods chapter, or just call the first one “Datasets and campaign”.
Author’s reply: Chapter renamed to “Datasets and campaign”
- L.282: What about MLEsub? It is available at a finer horizontal resolution (~18 km) and was introduced with the B16 reprocessed dataset, but you are not using it in your study. If it is out of the scope of the study, that is fine, but you should at least mention it and add a comment in the discussion that, for certain atmospheric conditions (low-level variability), it could improve the agreement between the ground-based and satellite-based comparison.
Author’s reply: Discussion about the MLEsub retrievals has been added after L282 and at ‘Conclusions’ section.
- Line 334: It is mentioned that the Aeolus L2A quality assurance flag is not used because it reduces the available data significantly. However, the L2A user guide recommends using the QC flag. What would the results look like with the QC applied? Is it possible to quantify how much more data would be rejected? Otherwise, I would recommend adding a comment that justifies why the quality assurance flag has not been used.
Author’s reply: We decided not to include the Quality Check flag in our analysis, as (i) it is available only for SCA and MLE retrievals, resulting in imbalanced sample sizes between SCA/MLE and AEL-PRO, and (ii) the sample size is significantly reduced (mainly at the lower heights). A dedicated comment has been added at line 334, together with two indicative plots (as Appendix A) showing the statistical results for the systematic errors in backscatter and extinction when both the Cloud Mask and Quality Check flags are taken into account.
- Chapter 3.2 Statistical analysis: For the case study in 3.1 it is very suitable to include the numbers in the running text, but the statistical analysis is too heavily loaded with numbers. I would suggest putting the most important numbers in a table (although I am not quite sure how best to realise it, because you are talking about different altitude ranges) and rather using qualitative statements in the text, e.g. “overestimates, strongest for SCA-mid, lowest for AEL-PRO”.
Author’s reply: Tables with the statistical results have been added as Appendix B.
- Not mandatory, but a recommendation to make the article better structured: 9x2 figures in the statistical analysis is an overwhelming amount of figures. If you want to keep them all in the manuscript, I would suggest making three figures of 3x2 panels each (first row backscatter versus altitude, second row backscatter versus OOD, third row backscatter versus scattering ratio), the next figure for extinction, and the third for lidar ratio. That would make the discussion of the results more logical in my opinion. Alternatively, keep them, but remove the lidar ratio analysis for OOD and scattering ratio with the explanation that the bias and RMSE are too high for all algorithms, which makes it challenging to draw conclusions on the impact of OOD and scattering ratio on the lidar ratio.
Author’s reply: The figures have been re-organized. The lidar ratio plots for OOD and scattering ratio classes have been removed since the metrics exhibit extremely high values.
- Conclusions: I would like to see a more in-depth discussion of the results. What are the implications of the findings for long-term aerosol research? Is Aeolus suitable to fill global aerosol datasets together with CALIOP and EarthCARE? How important would it be for Aeolus-like future missions to include a depolarisation channel? What are the main lessons learnt from this study? How representative is your comparison for other locations in the world and for other aerosol types?
Author’s Reply: The conclusions have been strongly modified, taking into account the related comments from Reviewer 2 (and Reviewer 1).
Minor technical comments:
- “Aeolus like” > 23 matches in the manuscript > It should be “Aeolus-like”
Author’s Reply: Change applied to all matches
- L.29: “The eVe lidar …” is a very long sentence. Try to split in two (the cross-polar component in the second sentence).
Author’s Reply: Revised. Sentence split in two.
- L.36: “during the nearest“ Please specify the applied collocation criteria (within 100 km…)
Author’s Reply: Revised. Collocation criteria added in the sentence.
- L.141: “the rest Aeolus algorithms” > rephrase to “the optimal estimation algorithms”
Author’s Reply: Rephrased
- L.199: The eVe lidar, deployed during the ASKOS/JATAC campaign, performed …
Author’s Reply: Revised by removing the “deployed during the ASKOS/JATAC campaign”.
- L.205 : "14 collocated profiles". What about including profiles -1 and +1 in addition to direct overpass to consolidate the statistics ? Aeolus profiles may indeed be individually affected by low SNR and therefore discarded, especially in the lower troposphere. Maybe to be mentioned in the discussion of the results.
Author’s Reply: We only considered the nearest Aeolus profile, because this guarantees that the atmosphere-related error (due to inhomogeneities and small-scale aerosol structures and gradients that may occur, mainly, inside the PBL) is minimal and random for the characterization of the systematic error. By also including the ±1 profiles around the nearest one, we would risk favoring specific scenes (where more BRCs would be taken into account based on the spatial collocation criterion of a 100 km radius), which would affect the calculation of the random errors. A dedicated discussion has been added in section 4 (Discussion and conclusions).
- Line 226: No need to mention the MCA, because you are not using this product. For your analysis, you do not need to provide a full overview of the L2A product (this is done in the Flamant et al. publications), only the parts you are using.
Author’s Reply: The MCA algorithm was mentioned in the context of giving a historical overview of the available Aeolus algorithms. The authors believe that the mention of MCA in Line 226 should not be removed, but the brief discussion about MCA in Lines 234-242 has been removed.
- L.277: add a citation for L-BFGS-B (Zhu et al., 1997)
Author’s Reply: Citation added.
- L.332: "Cloud flag from SCA algorithm". Should point ATBD to detail it (based on model ice and liquid water content), and coming with total cloud backscatter threshold of 1.0 × 10 ^-7 m^-1sr^-^1
Author’s Reply: Short discussion on the SCA cloud flag has been added, citing the ATBD (Flamant et al., 2022) for more details.
- L.375: Restricting the x-axis limits of Fig. 3 and Fig. 4 (for backscatter and extinction) may help in analysing the profile deviations.
Author’s Reply: The limits of x-axis of all affected figures have been updated accordingly.
- L.536: The lidar ratio shows the opposite behaviour to extinction and backscatter: the decrease in lidar ratio bias and random error with increasing optical depth could be explained by correlated errors in the extinction and backscatter retrievals under strong attenuation, causing their ratio to appear more stable even as the individual quantities become less reliable. This might be a retrieval artefact.
Author’s Reply: The reviewer is right, about the correlated errors and lidar ratio retrieval, but in any case, the errors are extremely high to make any conclusions. Moreover, there is similar comment from Reviewer 1. As such, the related figures (12 and 13) have been removed and a brief discussion about the high variability of the lidar ratio errors per bin has been added in line 478.
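[Editor's note] The cancellation effect discussed in this comment-reply pair (correlated errors in extinction and backscatter making their ratio, the lidar ratio, appear artificially stable) can be illustrated with a minimal numerical sketch. All values here (lidar ratio 50 sr, backscatter 1e-6 m⁻¹ sr⁻¹, 30 % relative error) are assumed for illustration only and are not taken from the manuscript:

```python
# Illustrative sketch: correlated errors in extinction and backscatter
# cancel in their ratio (the lidar ratio), while independent errors of
# the same magnitude do not. Values are assumed, not from the manuscript.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
S_true = 50.0        # assumed lidar ratio, sr
beta_true = 1e-6     # assumed backscatter, m^-1 sr^-1
alpha_true = S_true * beta_true   # extinction, m^-1
sigma = 0.3          # ~30 % relative error on each quantity

# Case 1: fully correlated errors (same multiplicative perturbation,
# e.g. from a shared attenuation term) hit both retrievals.
common = rng.lognormal(0.0, sigma, n)
S_corr = (alpha_true * common) / (beta_true * common)

# Case 2: independent errors of the same magnitude on each quantity.
alpha = alpha_true * rng.lognormal(0.0, sigma, n)
beta = beta_true * rng.lognormal(0.0, sigma, n)
S_indep = alpha / beta

print(f"correlated errors:  std(S) = {np.std(S_corr):.2e} sr")
print(f"independent errors: std(S) = {np.std(S_indep):.1f} sr")
```

The ratio in case 1 stays essentially at the true value even though each retrieved quantity individually carries large noise, which is exactly why a small lidar ratio spread under strong attenuation can be a retrieval artefact rather than a sign of accuracy.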
- L.559: "cloud flagged bin from SCA". Like in line 332, should mention that it is a model based cloud screening.
Author’s Reply: Revised as in the comment at Line 332
- L.608: Best agreement for the lidar ratio between Aeolus and eVe at high OOD. Same comment as for L.536: it is certainly good to mention it, but you may add a comment that this behaviour should be analysed with care, because of potential retrieval artefacts due to the correlated errors in extinction and backscatter.
Author’s Reply: Since the errors for lidar ratio are extremely large also in the analysis for different OOD and scattering ratio classes, figures 12 and 13 along with the extended discussion have been removed from the manuscript and a short discussion has been added instead, highlighting the low accuracy and high variability of the Aeolus lidar ratio retrievals.
Citation: https://doi.org/10.5194/egusphere-2025-1152-AC2
Data sets
eVe dataset in the ASKOS Campaign Dataset V. Amiridis et al. https://evdc.esa.int/publications/askos-campaign-dataset/
Aeolus Level 2A - Baseline 16 European Space Agency https://aeolus-ds.eo.esa.int/oads/access/collection/Level_2A_aerosol_cloud_optical_products_Reprocessed
The manuscript by Paschou et al. provides a detailed insight into the validation of the Aeolus spin-off aerosol optical properties products (four different algorithms). For this purpose, a dedicated ground-based reference instrument was operated on the Cabo Verdean islands in the tropical Atlantic. The authors could accomplish 14 direct overpasses, which is remarkable given the 7-day repeat cycle and the remote location. The authors discuss one case study in detail and perform a statistical analysis, for which they retrieve statistical metrics even though the dataset is sparse with 14 overpasses and thus not fully significant for the overall Aeolus performance in terms of region, time, and aerosol conditions.
Nevertheless, it is the optimum approach given the dataset and is valuable for the community. Furthermore, they investigate their findings for additional boundary conditions like the overlaying optical depth and scattering ratio, which is indeed of great interest. For this reason, the paper is in general well suited for AMT and this special issue. Nevertheless, I propose minor revisions, as I have some general comments which should be considered and a number of minor technical comments in the attached pdf.
General comments:
- Despite the great observations and general results, the publication in my opinion lacks a proper discussion of the results. The current manuscript mostly reports numbers from the comparison, which is not appropriate for a reader who is not a validation expert. At least an attempt should be made to contextualize the results in a broader view.
For example, please try to conclude, based on your analysis, which of the four algorithms performs best. If none can be identified, and each has its pros and cons, this is also a conclusion. Also state if there is one algorithm you cannot recommend for specific aerosol studies. Basically, you operated the only system measuring the same properties as Aeolus at the same viewing angle, and thus you are allowed to make clear statements, even though, of course, such statements might not be valid globally, for all baselines, and for the whole Aeolus lifetime.
- In this context, the conclusion really needs to be reworked. Currently it is just a summary or repetition of the chapter above, but not a conclusion. Please highlight in the conclusion only the important findings in a short and concise way, and discuss them with respect to the performance of Aeolus so that one can draw conclusions easily. Currently only the very last passage is a real conclusion.
- As stated, for the reader it would be beneficial to get some interpretation of the results guiding towards the performance of Aeolus. I know it is difficult, and it might only be valid for the Cabo Verde island area, but it is still worth stating something like: based on our statistical analysis, studies of the dust layer can be performed using the xx algorithm when interested in layer heights, but algorithm yyy seems to be more suited for the study of optical properties.
- I am not happy about the frequent and partly incorrect use of the word “bias”. If you have a deviation of the retrieved values for a case study, you cannot claim it as a bias already. In fact, you could only claim a deviation when considering the uncertainty of both systems. A bias itself can only be stated if you have a sufficient statistical analysis. Please correct this throughout the manuscript.
- I am really puzzled by the fact that you report no bias for the lidar ratio in cases with high atmospheric load (OOD 0.73–1.19 and scattering ratio 1.27–1.9). Either the result is completely not statistically relevant due to the large uncertainty in backscatter and extinction reported just before, and just a coincidence; in this case you should leave the statement out. Or it is due to the fact that the lidar ratio is an intensive property, and thus, for the same aerosol type, spatial variability cancels out. Some discussion about this would be appreciated.
- Concerning the findings in the PBL, conclusions on the representativeness, especially with regard to the coarse Aeolus resolution, would be very welcome, also because a follow-on mission is planned.
- In the introduction, many terms are used which are only defined later, making it partly not understandable for a non-expert. Please change this.
- I personally think that the nomenclature SCA-Rayleigh-bin is misleading. Maybe SCA-single-bin might be more appropriate?
- Please use a unique nomenclature throughout the manuscript. E.g. the mean bias is defined multiple times, which is really confusing.
- Generally, please explain all abbreviations when used the first time.
- Furthermore, please check all your plots for color-blind conformity. I think green, red, orange, and brown are among the worst-case color combinations for some people.
More specific comments are provided in the pdf.