Synergetic Retrieval from Multi-Mission Spaceborne Measurements for Enhancement of Aerosol and Surface Characterization
Abstract. Atmospheric aerosol is one of the main drivers of climate change. At present, a number of different satellites in Earth orbit are dedicated to aerosol studies. Due to limited information content, the main aerosol product of most satellite missions is AOD (Aerosol Optical Depth), while the accuracy of aerosol size and type retrieval from spaceborne remote sensing still requires substantial improvement. Combining measurements from different satellites substantially increases their information content and can therefore provide new possibilities for the retrieval of an extended set of both aerosol and surface properties.
A generalized synergetic approach for aerosol and surface characterization from diverse spaceborne measurements was developed on the basis of the GRASP (Generalized Retrieval of Atmosphere and Surface Properties) algorithm (the SYREMIS/GRASP approach). The concept was applied and tested on two types of synergetic measurements: (i) synergy of polar-orbiting satellites (LEO+LEO synergy) and (ii) synergy of polar-orbiting and geostationary satellites (LEO+GEO synergy). On the one hand, such a synergetic constellation extends the spectral range of the measurements. On the other hand, it provides unprecedented global spatial coverage with high temporal resolution, which is crucial for a number of climate studies.
In this paper we discuss the physical basis and concept of the LEO+LEO and LEO+GEO synergies used in the GRASP retrieval and demonstrate that the SYREMIS/GRASP approach allows information to be transferred from the instruments with the richest information content to those with less. This results in substantial enhancements in aerosol and surface characterization for all instruments in the synergy.
Competing interests: Some authors are members of the editorial board of the journal AMT.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. Views expressed in the text are those of the authors and do not necessarily reflect the views of the publisher.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2025-1536', Anonymous Referee #1, 09 May 2025
The paper presents a validation of several implementations of the GRASP algorithm for the retrieval of aerosol optical depth (AOD), single-scattering albedo (SSA), Angstrom exponent, and surface bidirectional reflectance function (BRDF). The focus is two synergistic retrievals: one combining TROPOMI and OLCI with a second scheme further including AHI. Their results are validated against AERONET, demonstrating that the 3-sensor approach out-performs the 2-sensor one, which itself out-performs single-sensor analyses. A less detailed comparison of BRDF is presented against MODIS.
This work is the sort of validation study that every algorithm team should publish from time to time. It should be accepted after some minor corrections.
- The figure captions are insufficient. Given GRASP’s popularity, this might be the first time someone ever encounters an aerosol validation and we should try to be welcoming. Those that begin “The same as” are fine, but the remainder assume the reader is familiar with the standard validation plots of aerosol retrieval methods. Fig. 2 should explain (i) what the annotation provides, (ii) what the grey envelope denotes, (iii) what the colour represents, (iv) what AERONET is given that it’s never introduced or cited.
- I also remind the authors that use of the rainbow colour map is discouraged for reasons eloquently explained in doi:10.1038/s41467-020-19160-7.
- For a paper that sets out to “discuss the physical basis and concept” of its retrievals, there is minimal description of the actual algorithm beyond Tables 2-4. It would be impossible for a PhD student to implement the technique introduced from this paper alone. I know that the GRASP method is extremely thoroughly documented, and that those papers are already cited, but the authors could provide slightly more guidance to the unfamiliar reader in the paragraph of lines 113-119. Something like, “An outline of the general infrastructure for GRASP is provided in XXX, with specific details as to the aerosol model approach in YYY and data harmonisation methods in ZZZ; examples and tutorials can be found at grasp-earth.com.”
- Further to that point, it would be useful to know a little more about how the decision-making process behind section 2.2 beyond “A number of extensive case studies were performed to identify the most optimal retrieval setup.” I expect that this was trial-and-error (which is fine) but it’d be useful to know what you were looking for in order to understand how these weights should be interpreted in future. What were you trying to optimize (e.g. best correlation with AERONET, smallest residuals, spatially coherent fields, minimal processing time, results that ‘looked right’, eliciting minimal complaints from ESA technical officers)? Why did you pick the values of weight you did (i.e. are they similar to the expected uncertainty or were they convenient round numbers)?
- In lines 213-214, the terms “weighting” and “standard deviation” appear to be used as synonyms. On line 201, they appear to mean different things (SD being the variation of data going into the harmonization and W being the value given to the retrieval code to use within a covariance matrix). Please check this section to make sure you are being consistent.
- The wording of lines 254-259 has confused me. You say that the combination of three instruments “contains more information about temporal variability”, but I thought that the opposite was the case? As more instruments are added to each harmonized “pixel”, that pixel represents a greater window of time and so contains less information about temporal variability because it is smoothing over a longer duration. Thus, the smoothness constraint becomes smaller because the expected covariance of subsequent pixels has decreased. I could be entirely wrong here, as I think in covariances rather than in smoothness constraints which may be misleading me.
- It is nice to see a validation of BRDF in section 3.3 as this is commonly overlooked despite most aerosol retrievals considering it to some extent. However, the discussion is rather unsatisfying as Figs. 17-19 exhibit fairly substantial differences between GRASP and MODIS without commentary. I disagree with line 398 that the retrievals are “very similar”. They’re qualitatively similar, but GRASP is much less spatially complete and exhibits differences to MODIS of sufficient magnitude to be relevant and that correlate with surface types. As BRDF is not the focus of this team, I’m not asking for a robust validation but, at a minimum, Fig. 19 deserves more discussion. GRASP is producing a much wider range of values and a qualitative comment upon whether the authors believe their BRDF is better or worse than MODIS would be interesting, if only to inform data users as to whether the team thinks there is any scientific merit in the product.
- Also, on lines 405 and 456, you state that the MODIS BRDF is a one-angle observation. When one refers to “MODIS BRDF”, I think of MCD43A1, which is based on observations from a 16-day window in order to capture a range of angles. There is surface reflectance in the MOD04/MYD04, but that isn’t presented in terms of the Ross-Li kernels. The authors should specify which product they are comparing against and, if it is MCD43, describe it appropriately.
- There is no acknowledgement for the MODIS data used. I believe all of the datasets now have a DOI.
- At line 433, my gut instinct is that TROPOMI provides the most information, rather than the richest information, as a greater number of channels are utilised. To comment on the richness of the information would require considering, say, the number of degrees of freedom per input channel. (This may very well be highest for TROPOMI as it has good uncertainty characteristics, but that isn’t examined in this manuscript.)
The paper’s weakest area is its language, which was difficult for this native speaker to read. It is technically correct but uses an unusual syntax that took some getting used to. A number of corrections are offered in the attached PDF but there are two recurring issues that warrant mention here.
- I am unfamiliar with the use of “essentially” in this paper. It appears to be used where “significantly” or (better) “substantially” would be.
- “The” is frequently used incorrectly. I admit that the rules for “the” are difficult to explain, but it usually refers to something singular or unique: the GRASP algorithm is different to an aerosol retrieval while the MODIS dataset is different to a datapoint. A copy-editor would be exceedingly useful in this regard as I didn’t catch them all.
AC1: 'Reply on RC1', Pavel Litvinov, 29 Aug 2025
Reviewer 1 (replies to the comments are marked in red in the "Supplement" PDF file)
The paper presents a validation of several implementations of the GRASP algorithm for the retrieval of aerosol optical depth (AOD), single-scattering albedo (SSA), Angstrom exponent, and surface bidirectional reflectance function (BRDF). The focus is two synergistic retrievals: one combining TROPOMI and OLCI with a second scheme further including AHI. Their results are validated against AERONET, demonstrating that the 3-sensor approach out-performs the 2-sensor one, which itself out-performs single-sensor analyses. A less detailed comparison of BRDF is presented against MODIS.
This work is the sort of validation study that every algorithm team should publish from time to time. It should be accepted after some minor corrections.
Response:
We sincerely thank the reviewer for the very valuable feedback, comments, and corrections to the English language of the manuscript. The main purpose of the paper is to present the physical basis and concept of the developed synergetic approach, which was implemented in the GRASP algorithm. The validation results were presented only to prove the concept and to demonstrate the new possibilities of the SYREMIS/GRASP approach. To emphasize this main purpose, the paper was modified accordingly.
- The figure captions are insufficient. Given GRASP’s popularity, this might be the first time someone ever encounters an aerosol validation and we should try to be welcoming. Those that begin “The same as” are fine, but the remainder assume the reader is familiar with the standard validation plots of aerosol retrieval methods. Fig. 2 should explain (i) what the annotation provides, (ii) what the grey envelope denotes, (iii) what the colour represents, (iv) what AERONET is given that it’s never introduced or cited.
- I also remind the authors that use of the rainbow colour map is discouraged for reasons eloquently explained in doi:10.1038/s41467-020-19160-7.
Response:
Many thanks for pointing this out. We have updated the validation plots throughout the manuscript. The Fig. 4 caption and Section 2.4 explain all the details of the scatter plots and histograms: the correlation statistics in the legends, the grey envelope, the colour of each data point, the AERONET reference, etc. The x- and y-axis scales of the scatter plots and the x-axis width of the histograms were adjusted for better visualization.
- For a paper that sets out to “discuss the physical basis and concept” of its retrievals, there is minimal description of the actual algorithm beyond Tables 2-4. It would be impossible for a PhD student to implement the technique introduced from this paper alone. I know that the GRASP method is extremely thoroughly documented, and that those papers are already cited, but the authors could provide slightly more guidance to the unfamiliar reader in the paragraph of lines 113-119. Something like, “An outline of the general infrastructure for GRASP is provided in XXX, with specific details as to the aerosol model approach in YYY and data harmonisation methods in ZZZ; examples and tutorials can be found at grasp-earth.com.”
Response:
Section 2, “SYREMIS/GRASP synergetic concept”, has been revised and extended considerably with a more detailed description of the preparation of the synergetic measurements, the forward models, the instrument “weighting” approach, and the a priori constraints in the GRASP algorithm used for the SYREMIS approach.
- Further to that point, it would be useful to know a little more about how the decision-making process behind section 2.2 beyond “A number of extensive case studies were performed to identify the most optimal retrieval setup.” I expect that this was trial-and-error (which is fine) but it’d be useful to know what you were looking for in order to understand how these weights should be interpreted in future. What were you trying to optimize (e.g. best correlation with AERONET, smallest residuals, spatially coherent fields, minimal processing time, results that ‘looked right’, eliciting minimal complaints from ESA technical officers)? Why did you pick the values of weight you did (i.e. are they similar to the expected uncertainty or were they convenient round numbers)?
Response:
A new Section 2.4, “Remote sensing tests to optimize synergetic retrieval”, has been introduced. It contains all the information regarding the tests performed to validate and optimize the synergetic approach, describes which validation datasets were used for these purposes, and explains what the criteria were for selecting the best approach.
- In lines 213-214, the terms “weighting” and “standard deviation” appear to be used as synonyms. On line 201, they appear to mean different things (SD being the variation of data going into the harmonization and W being the value given to the retrieval code to use within a covariance matrix). Please check this section to make sure you are being consistent.
Response:
The updated Sections 2.2 and 2.3 and the new Appendix A describe the “weighting” due to the measurements and the “weighting” due to the a priori constraints (smoothness constraints) in the GRASP algorithm, and how they were used in the SYREMIS synergy (Sections 2.2-2.4, Appendix A).
- The wording of lines 254-259 has confused me. You say that the combination of three instruments “contains more information about temporal variability”, but I thought that the opposite was the case? As more instruments are added to each harmonized “pixel”, that pixel represents a greater window of time and so contains less information about temporal variability because it is smoothing over a longer duration. Thus, the smoothness constraint becomes smaller because the expected covariance of subsequent pixels has decreased. I could be entirely wrong here, as I think in covariances rather than in smoothness constraints which may be misleading me.
Response:
By “more information about temporal variability” from the combination of instruments, we mean the information contained in the synergetic measurements themselves. For example, for any collocated pixel from the three LEO satellites there may be several measurements at different times within one day: S3A/OLCI, S3B/OLCI, and S5P/TROPOMI. Therefore, the synergetic measurements provide much better information about the temporal variability of aerosol and surface properties than the single instruments. These multi-temporal synergetic measurements are the input for SYREMIS/GRASP. With properly adjusted temporal thresholds and “multi-temporal” smoothness constraints (Sections 2.3 and 2.4), this results in a consistent retrieval of the temporal dependence of the aerosol and surface characteristics. To describe this in more detail, we updated the description of the synergetic data preparation, introduced the concept of the spatial-temporal dataset block, and provided more details on how these measurements, which are not collocated in time, are treated in GRASP with temporal thresholds and a priori temporal smoothness constraints (Section 2).
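As a schematic illustration of the “multi-temporal” smoothness constraints mentioned above (a simplified sketch of the multi-term formulation, not the exact SYREMIS configuration), the temporal smoothness term penalizes finite differences of each retrieved parameter along the time series of the block:

\Psi_{\mathrm{time}}(\mathbf{a}) \;=\; \tfrac{\gamma_t}{2}\,\mathbf{a}^{\top}\mathbf{D}_t^{\top}\mathbf{D}_t\,\mathbf{a},

where \mathbf{D}_t is a finite-difference operator of chosen order acting on the time dimension and \gamma_t is the corresponding Lagrange multiplier. The temporal thresholds determine which time steps share a single parameter value outright, while \gamma_t controls how strongly the remaining time steps are tied to each other.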
- It is nice to see a validation of BRDF in section 3.3 as this is commonly overlooked despite most aerosol retrievals considering it to some extent. However, the discussion is rather unsatisfying as Figs. 17-19 exhibit fairly substantial differences between GRASP and MODIS without commentary. I disagree with line 398 that the retrievals are “very similar”. They’re qualitatively similar, but GRASP is much less spatially complete and exhibits differences to MODIS of sufficient magnitude to be relevant and that correlate with surface types. As BRDF is not the focus of this team, I’m not asking for a robust validation but, at a minimum, Fig. 19 deserves more discussion. GRASP is producing a much wider range of values and a qualitative comment upon whether the authors believe their BRDF is better or worse than MODIS would be interesting, if only to inform data users as to whether the team thinks there is any scientific merit in the product.
- Also, on lines 405 and 456, you state that the MODIS BRDF is a one-angle observation. When one refers to “MODIS BRDF”, I think of MCD43A1, which is based on observations from a 16-day window in order to capture a range of angles. There is surface reflectance in the MOD04/MYD04, but that isn’t presented in terms of the Ross-Li kernels. The authors should specify which product they are comparing against and, if it is MCD43, describe it appropriately.
- There is no acknowledgement for the MODIS data used. I believe all of the datasets now have a DOI.
Response:
For the global intercomparison with the SYREMIS Ross-Li BRDF model parameters, the MCD43C1 dataset was used. This is explained in Sec. 3.3, with the reference DOI provided. Indeed, the MCD43C1 daily BRDF product is produced using 16 days of Terra and Aqua MODIS data covering a range of scattering angles.
In the BRDF intercomparison maps, the SYREMIS and MODIS BRDF maps differ in spatial completeness over the Amazon and high-latitude regions mainly due to differences in the cloud/snow masks for TROPOMI/OLCI and MODIS. Aggressive cloud/snow masking was employed for TROPOMI; this may remove more pixels than for MODIS, especially in regions such as the Amazon, high latitudes, and Tibet. In the revised manuscript, we provide more discussion in Section 3.4 of the differences in the global distribution of AOD and BRDF.
Overall, the stronger variability of the second and third BRDF parameters from SYREMIS can be explained by the pseudo-multi-angular measurements in the synergetic retrieval, which carry much more angular information (up to 150 accumulated measurements in the LEO+LEO synergy within one month) than any of the single instruments with one observation angle per measurement (MODIS, S5P/TROPOMI, S3A/OLCI, S3B/OLCI, etc.), even though the 16-day aggregated retrieval method was used for the MCD43C1 product. Because of this, the SYREMIS synergy measurements provide more information about the surface angular reflectance properties, which, we think, results in a better characterization of the BRDF parameters representing the surface angular dependence.
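For context, the kernel-driven Ross-Li form shared by MCD43C1 and the SYREMIS surface product is sketched here from the standard literature for the unfamiliar reader:

R(\theta_s,\theta_v,\phi;\lambda) \;=\; f_{\mathrm{iso}}(\lambda) \;+\; f_{\mathrm{vol}}(\lambda)\,K_{\mathrm{vol}}(\theta_s,\theta_v,\phi) \;+\; f_{\mathrm{geo}}(\lambda)\,K_{\mathrm{geo}}(\theta_s,\theta_v,\phi),

where \theta_s, \theta_v, and \phi are the solar zenith, view zenith, and relative azimuth angles, and K_vol and K_geo are the fixed RossThick and LiSparse kernels. The “second and third parameters” discussed above are the kernel weights f_vol and f_geo, which carry the entire angular dependence of the model.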
A separate manuscript, currently in preparation, will present a global intercomparison between SYREMIS and reference satellite surface products, including MODIS and VIIRS, with an in-depth analysis of their differences.
- At line 433, my gut instinct is that TROPOMI provides the most information, rather than the richest information, as a greater number of channels are utilised. To comment on the richness of the information would require considering, say, the number of degrees of freedom per input channel. (This may very well be highest for TROPOMI as it has good uncertainty characteristics, but that isn’t examined in this manuscript.)
Response:
TROPOMI provides more information due to both a wider spectral range of measurements (from the UV to the SWIR) and a much wider swath than, for example, OLCI. In the paper, we used “richest information” as a synonym for “the most information”. Indeed, to avoid confusion with information content analysis, we use the phrase “the most information” in the revised manuscript.
The paper’s weakest area is its language, which was difficult for this native speaker to read. It is technically correct but uses an unusual syntax that took some getting used to. A number of corrections are offered in the attached PDF but there are two recurring issues that warrant mention here.
- I am unfamiliar with the use of “essentially” in this paper. It appears to be used where “significantly” or (better) “substantially” would be.
- “The” is frequently used incorrectly. I admit that the rules for “the” are difficult to explain, but it usually refers to something singular or unique: the GRASP algorithm is different to an aerosol retrieval while the MODIS dataset is different to a datapoint. A copy-editor would be exceedingly useful in this regard as I didn’t catch them all.
Response:
We are very grateful to the reviewer for the corrections kindly made in the paper. All of them were accounted for, and the English language was improved.
RC2: 'Comment on egusphere-2025-1536', Anonymous Referee #2, 09 Jun 2025
While satellite-based remote sensing provides good estimates of the aerosol optical depth (AOD), the same is not true of other aerosol parameters. The authors have developed an algorithm to use a combination of spaceborne measurements to improve aerosol (and surface) characterization.
The basic idea is that these measurements encompass some or all of the following: (1) a range of scattering angles (enabling observation of differences in angular dependence of aerosol and surface signals); (2) a wide spectral range (enabling observation of differences in spectral dependence of aerosol and surface signals); (3) polarimetry (enabling observation of differences in the polarization signatures of aerosol and surface signals, which relate to aerosol microphysical properties); and (4) high temporal resolution (enabling observation of the temporal variability of aerosol properties and differences in aerosol and surface signals over the relevant time period).
In particular, existing algorithms are unable to handle observations that are not collocated in time.
The authors take advantage of the fact that aerosol properties show temporal and spatial correlations. Their new algorithm (called SYREMIS) is generic and can be applied to a variety of satellite observations. They explore both LEO-LEO and LEO-GEO synergy. For the former, they test their algorithm on combined measurements from S5P/TROPOMI, S3A/OLCI and S3B/OLCI. For the latter, they apply their algorithm to S5P/TROPOMI, S3A/OLCI, S3B/OLCI and Himawari-8/AHI measurements.
The theoretical basis for this work is good, which is a strength of the study. However, there are missing aspects in the manuscript. In particular, the results do not adequately demonstrate the concept. There is not enough explanation of why the results are different between the existing methods and the proposed new approach, or why the authors believe the synergistic product is more accurate. Further, as written, the manuscript reads more like a news report than a research article. The results are described without any discussion of the broader scientific principles or implications. For a journal paper, it is crucial to go beyond descriptive reporting and provide insights that can be generalized or that significantly advance the understanding of the topic. Finally, there are several instances of poor grammar and typos. I recommend the paper to be carefully proofread. I recommend a major revision addressing all these issues before the paper is reconsidered for publication.
Specific Comments:
Line 174: 21 bands -> 24 bands
Table 2: What are the 19(24) spectral bands used in the LEO-LEO(LEO-GEO) synergy? The specific wavelengths need to be mentioned, especially because Lines 194-199 indicate that not all individual measurement bands were used.
Lines 198-199: “Accounting for the differences in the calibration and spectral bandwidth in GRASP algorithm is realized with application of the different requirements on the standard deviation of measurements fitting for the different spectral bands.” What does this mean?
Section 2.2: The description of how the weighting of the different measurements is done is very unclear. Given that the weighting is one of the significant innovations in this work, this is a major drawback. It would help to have an equation outlining this process and some text explaining how the parameters are chosen, along with a clear description of the rationale. This would also help clarify what the parameters in Table 3 mean.
Lines 249-253: “In particular, in the SYREMIS/GRASP processing the surface properties are considered to be the constant within +/-6h over land and +/-0.5h over ocean (“Temporal threshold on surface variability” in Tables 3 and 4). For the vertical distribution of aerosol concentration, the temporal threshold +/- 3h over land +/-0.5h over ocean was applied (“Aerosol scale height variability” in Tables 3 and 4).” What is the rationale for selecting these values? As the authors correctly mention, correct selection of these constraints is crucial when the measurements are not coincident. However, there is no explanation of how they arrived at these values. There is some description of the constraints being relaxed compared to those for single instruments, but there needs to be a more detailed explanation of the rationale. Further, why are the constraints more relaxed for LEO-GEO than LEO-LEO?
Figure 3: After harmonization, instrument weighting and retrieval setup optimization, it seems that individual instruments perform as well as the combination. If this is true, then what is the point of using the combination? If not, the advantages should be clearly explained.
Lines 275-276: “The validation criteria are the same as was used for TROPOMI/GRASP retrieval evaluation (Litvinov et al., 2024; Chen et al., 2024a).” Even though the validation criteria have been discussed in detail in other papers, it would be useful to the readers to have a summary here.
Lines 280-281: The authors use the phrase “instrument extracted from the synergy”. How is this different from using the measurements from a single instrument? For example, how are SYREMIS/TROPOMI and SYREMIS/OLCI different from GRASP/TROPOMI and GRASP/OLCI?
Figure 4 seems to suggest that TROPOMI alone performs better than using all instruments together. Why combine the instruments then? Also, OLCI results seem to be much worse than those from TROPOMI. What advantage does using OLCI measurements provide?
The same is true for the AE and SSA results (Figures 5 and 6). In fact, for the AE, the results are very different from AERONET results. I actually do not see much of a use for satellite-derived results, single or combined. Similar comments apply to Figures 9, 10 and 11. The LEO-GEO combination (Figures 12-14) seems to suffer from similar issues, with Himawari providing almost all the information in that case and the other instruments having negligible contributions. In Figure 15 (that is not referenced in the text), what do SYREMIS/TROPOMI LEO+GEO and SYREMIS TROPOMI LEO+LEO represent? What instruments are covered in these combinations?
Figure 8: What does QA>=2 mean? The meaning of that expression needs to be clarified. Also, it seems that the performance of SYREMIS, compared with GRASP, is better over land. That contradicts the authors’ claim that the synergistic retrieval is better than the retrieval from individual instruments.
Lines 393-394: “One can see from Fig. 16 that, overall, SYREMIS/GRASP AOD retrieval corresponds well to VIIRS, MODIS, TROPOMI/GRASP and OLCI/GRASP products.” I disagree. It seems to me that TROPOMI/GRASP results agree well with VIIRS results, but SYREMIS results differ considerably, especially over the Sahara and the Middle East (bright surfaces?).
Citation: https://doi.org/10.5194/egusphere-2025-1536-RC2
AC2: 'Reply on RC2', Pavel Litvinov, 29 Aug 2025
Reviewer 2 (replies to the comments are marked in red in the Supplement PDF file)
While satellite-based remote sensing provides good estimates of the aerosol optical depth (AOD), the same is not true of other aerosol parameters. The authors have developed an algorithm to use a combination of spaceborne measurements to improve aerosol (and surface) characterization.
The basic idea is that these measurements encompass some or all of the following: (1) a range of scattering angles (enabling observation of differences in angular dependence of aerosol and surface signals); (2) a wide spectral range (enabling observation of differences in spectral dependence of aerosol and surface signals); (3) polarimetry (enabling observation of differences in the polarization signatures of aerosol and surface signals, which relate to aerosol microphysical properties); and (4) high temporal resolution (enabling observation of the temporal variability of aerosol properties and differences in aerosol and surface signals over the relevant time period).
In particular, existing algorithms are unable to handle observations that are not collocated in time.
The authors take advantage of the fact that aerosol properties show temporal and spatial correlations. Their new algorithm (called SYREMIS) is generic and can be applied to a variety of satellite observations. They explore both LEO-LEO and LEO-GEO synergy. For the former, they test their algorithm on combined measurements from S5P/TROPOMI, S3A/OLCI and S3B/OLCI. For the latter, they apply their algorithm to S5P/TROPOMI, S3A/OLCI, S3B/OLCI and Himawari-8/AHI measurements.
The theoretical basis for this work is good, which is a strength of the study. However, there are missing aspects in the manuscript. In particular, the results do not adequately demonstrate the concept. There is not enough explanation of why the results are different between the existing methods and the proposed new approach, or why the authors believe the synergistic product is more accurate. Further, as written, the manuscript reads more like a news report than a research article. The results are described without any discussion of the broader scientific principles or implications. For a journal paper, it is crucial to go beyond descriptive reporting and provide insights that can be generalized or that significantly advance the understanding of the topic. Finally, there are several instances of poor grammar and typos. I recommend the paper to be carefully proofread. I recommend a major revision addressing all these issues before the paper is reconsidered for publication.
Response:
The authors thank the reviewer for the critical but very valuable comments. The missing aspects pointed out by the reviewer were carefully considered, and corresponding corrections were made. Overall, this includes:
- A more detailed description of the physical basis of the synergetic approach, including the data preparation for the synergy, the forward modelling, the GRASP inversion approach for the SYREMIS synergy, and the multi-pixel synergetic concept. We also emphasized the differences from, and advantages over, the existing methods (Sections 1 and 2).
- The results are presented in more detail, showing the advantages of the synergetic approach (Sections 2.4 and 3).
- The scientific discussion is substantially extended throughout the manuscript.
- The paper was carefully reviewed, and the English language was improved.
Specific Comments:
Line 174: 21 bands -> 24 bands
Response:
Corrected.
Table 2: What are the 19(24) spectral bands used in the LEO-LEO(LEO-GEO) synergy? The specific wavelengths need to be mentioned, especially because Lines 194-199 indicate that not all individual measurement bands were used.
Response:
In our study, we used specific spectral bands from the OLCI, TROPOMI, and AHI sensors for the LEO-LEO and LEO-GEO synergy retrievals. We combined 9 bands from OLCI and 10 bands from TROPOMI, resulting in a total of 19 spectral bands: 340, 367, 380, 412.5, 416, 440, 442.5, 490, 494, 510, 560, 665, 670, 747, 753, 772, 865, 1020, and 2313 nm. For the LEO-GEO synergy, we combined 9 OLCI bands, 10 TROPOMI bands, and 6 AHI bands, accounting for the common 510 nm band shared between TROPOMI and AHI. This results in a total of 24 spectral bands: 340, 367, 380, 412.5, 416, 440, 442.5, 470, 490, 494, 510, 560, 639.1, 665, 670, 747, 753, 772, 856.7, 865, 1020, 1610.1, 2256.8, and 2313 nm. Table 2 has been updated accordingly.
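As an illustration of this bookkeeping, here is a minimal sketch of merging the instrument band lists into one harmonized grid. The per-instrument band assignment below is reconstructed for illustration only (the text above states only the merged lists and totals):

```python
# Illustrative per-instrument band lists (nm); the assignment to instruments
# is assumed here for the sketch -- only the merged totals are stated above.
OLCI_NM    = [412.5, 442.5, 490.0, 510.0, 560.0, 665.0, 753.0, 865.0, 1020.0]
TROPOMI_NM = [340.0, 367.0, 380.0, 416.0, 440.0, 494.0, 670.0, 747.0, 772.0, 2313.0]
AHI_NM     = [470.0, 510.0, 639.1, 856.7, 1610.1, 2256.8]

def harmonize(*band_lists, tol_nm=1.0):
    """Merge band lists into one ascending grid, collapsing bands that lie
    within tol_nm of each other (e.g. a shared 510 nm band) into one channel."""
    merged = []
    for nm in sorted(nm for bands in band_lists for nm in bands):
        if not merged or nm - merged[-1] > tol_nm:
            merged.append(nm)
    return merged

print(len(harmonize(OLCI_NM, TROPOMI_NM)))           # 19 (LEO+LEO)
print(len(harmonize(OLCI_NM, TROPOMI_NM, AHI_NM)))   # 24 (LEO+GEO)
```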
Lines 198-199: “Accounting for the differences in the calibration and spectral bandwidth in GRASP algorithm is realized with application of the different requirements on the standard deviation of measurements fitting for the different spectral bands.” What does this mean?
Response:
Section 2 was substantially revised, and Appendix A was added to describe the relation between “the standard deviation of measurement fitting” and measurement “weighting”.
Section 2.2: The description of how the weighting of the different measurements is done is very unclear. Given that the weighting is one of the significant innovations in this work, this is a major drawback. It would help to have an equation outlining this process and some text explaining how the parameters are chosen, along with a clear description of the rationale. This would also help clarify what the parameters in Table 3 mean.
Response:
The updated Sections 2.2 and 2.3 and the new Appendix A describe the “weighting” due to the measurements and the “weighting” due to the a priori constraints (smoothness constraints) in the GRASP algorithm, and how they were used in the SYREMIS synergy (Sections 2.2-2.4, Appendix A).
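For the unfamiliar reader, a schematic sketch (our simplified reading of the published multi-term LSM formulation underlying GRASP, e.g. Dubovik et al.; the exact SYREMIS settings are given in the revised Appendix A) of the cost function minimized for each spatial-temporal block is:

\Psi(\mathbf{a}) \;=\; \tfrac{1}{2}\sum_{k}\gamma_k\,\big(\mathbf{f}_k^{*}-\mathbf{f}_k(\mathbf{a})\big)^{\top}\mathbf{W}_k^{-1}\big(\mathbf{f}_k^{*}-\mathbf{f}_k(\mathbf{a})\big) \;+\; \tfrac{1}{2}\sum_{m}\gamma_m\,\mathbf{a}^{\top}\boldsymbol{\Omega}_m\,\mathbf{a},

where \mathbf{f}_k^{*} is the k-th data subset (an instrument or a group of spectral bands), \mathbf{f}_k(\mathbf{a}) is its forward-model fit, \mathbf{W}_k is a normalized weight matrix, and \boldsymbol{\Omega}_m are the smoothness matrices of the a priori terms. The Lagrange multipliers \gamma_k scale inversely with the assumed variance of the fitting residual of subset k, so prescribing a different “standard deviation of measurement fitting” per spectral band is equivalent to assigning that band a different weight in the inversion.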
Lines 249-253: “In particular, in the SYREMIS/GRASP processing the surface properties are considered to be the constant within +/-6h over land and +/-0.5h over ocean (“Temporal threshold on surface variability” in Tables 3 and 4). For the vertical distribution of aerosol concentration, the temporal threshold +/- 3h over land +/-0.5h over ocean was applied (“Aerosol scale height variability” in Tables 3 and 4).” What is the rationale for selecting these values? As the authors correctly mention, correct selection of these constraints is crucial when the measurements are not coincident. However, there is no explanation of how they arrived at these values. There is some description of the constraints being relaxed compared to those for single instruments, but there needs to be a more detailed explanation of the rationale. Further, why are the constraints more relaxed for LEO-GEO than LEO-LEO?
Response:
BRDF parameters are determined by intrinsic properties of the surface, such as surface type, reflecting properties, and topography, which are very stable in time. Therefore, land surface BRDF parameters do not change considerably during a day. The +/-6 h threshold applied to the BRDF parameters in the SYREMIS approach allows using almost the same BRDF parameters for all synergetic measurements during the day. This threshold can change depending on the pixel latitude due to different satellite overpass times at low and high latitudes. For the considered synergetic satellite constellation, +/-6 h gives a 12 h interval, which is sufficient to account for a stable surface within the day globally. The day-to-day variability of the BRDF parameters is controlled by the surface temporal smoothness constraints. Both the temporal thresholds and the smoothness constraints substantially increase the number of pseudo-multi-angular measurements in the synergy, which is crucial for the retrieval of the BRDF parameters and the discrimination of atmosphere and surface signals.
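To illustrate how such a threshold can act on the synergetic time series, here is a minimal sketch with a hypothetical helper (not the SYREMIS code; the overpass times are nominal local times):

```python
from datetime import datetime, timedelta

def group_by_threshold(times, threshold=timedelta(hours=6)):
    """Partition sorted times into groups whose span stays within 2*threshold,
    i.e. every member lies within +/-threshold of the group centre; all
    measurements in one group then share a single set of BRDF parameters."""
    groups, current = [], [times[0]]
    for t in times[1:]:
        if t - current[0] <= 2 * threshold:
            current.append(t)
        else:
            groups.append(current)
            current = [t]
    groups.append(current)
    return groups

# Example: S3A/OLCI, S3B/OLCI and S5P/TROPOMI overpasses on one day
overpasses = [datetime(2020, 6, 1, 9, 30),   # S3A/OLCI
              datetime(2020, 6, 1, 10, 0),   # S3B/OLCI
              datetime(2020, 6, 1, 13, 30)]  # S5P/TROPOMI
print([len(g) for g in group_by_threshold(sorted(overpasses))])  # [3]
```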
Overall, Section 2 was revised to provide the reasoning (Sections 2.1-2.3) and the criteria for selecting the best retrieval setup (new Section 2.4). The tables with parameters were modified to provide general information related to the synergetic measurements rather than GRASP-specific details. The corresponding discussions were added in Section 2.
Figure 3: After harmonization, instrument weighting, and retrieval setup optimization, it seems that individual instruments perform as well as the combination. If this is true, then what is the point of using the combination? If not, the advantages should be clearly explained.
Response:
We demonstrate in this paper that the synergetic retrieval performs better than the retrieval from a single instrument. To show this more clearly, the validation figures were changed, tables with statistical validation characteristics were added, and the discussion of the synergetic results was extended. To avoid confusion, Section 3 was subdivided into four subsections:
3.1 SYREMIS/GRASP LEO+LEO synergy performance versus AERONET
3.2 SYREMIS/GRASP LEO+LEO inter-comparison with GRASP single instrument retrieval over AERONET
3.3 SYREMIS/GRASP LEO+GEO synergetic performance versus AERONET
3.4 SYREMIS/GRASP aerosol and surface products global intercomparison
Lines 275-276: “The validation criteria are the same as was used for TROPOMI/GRASP retrieval evaluation (Litvinov et al., 2024; Chen et al., 2024a).” Even though the validation criteria have been discussed in detail in other papers, it would be useful to the readers to have a summary here.
Response:
The validation criteria are presented and discussed in the new Section 2.4, “Remote sensing tests to optimize synergetic retrieval”.
Lines 280-281: The authors use the phrase “instrument extracted from the synergy”. How is this different from using the measurements from a single instrument? For example, how are SYREMIS/TROPOMI and SYREMIS/OLCI different from GRASP/TROPOMI and GRASP/OLCI?
Response:
The phrase “instrument extracted from the synergy” means that we extract the retrieved properties (aerosol or surface) from the synergetic product at the time step corresponding to a certain instrument. This is different from the properties derived from an independent single-instrument retrieval. The clarification and discussion have been added to the paper.
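Schematically, in terms of an illustrative data layout (the column names and values below are made up for illustration, not the SYREMIS product format):

```python
import pandas as pd

# Every retrieved time step of the joint synergy product is tagged with the
# instrument that provided it; "SYREMIS LEO+LEO: TROPOMI" is then simply the
# TROPOMI-tagged subset of the joint retrieval, not an independent run.
synergy = pd.DataFrame({
    "time_utc":   ["09:30", "10:00", "13:30"],
    "instrument": ["S3A/OLCI", "S3B/OLCI", "S5P/TROPOMI"],
    "aod_550":    [0.21, 0.22, 0.24],
})
syremis_tropomi = synergy[synergy["instrument"] == "S5P/TROPOMI"]
print(syremis_tropomi)
```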
Figure 4 seems to suggest that TROPOMI alone performs better than using all instruments together. Why combine the instruments then? Also, OLCI results seem to be much worse than those from TROPOMI. What advantage does using OLCI measurements provide?
The same is true for the AE and SSA results (Figures 5 and 6). In fact, for the AE, the results are very different from AERONET results. I actually do not see much of a use for satellite-derived results, single or combined. Similar comments apply to Figures 9, 10 and 11. The LEO-GEO combination (Figures 12-14) seems to suffer from similar issues, with Himawari providing almost all the information in that case and the other instruments having negligible contributions. In Figure 15 (that is not referenced in the text), what do SYREMIS/TROPOMI LEO+GEO and SYREMIS TROPOMI LEO+LEO represent? What instruments are covered in these combinations?
Response:
We have revised Section 3 and the figures to better highlight the key points as suggested.
We used the following notation, explained in Section 3:
- SYREMIS LEO+LEO: TROPOMI denotes the TROPOMI retrieval results obtained from the SYREMIS LEO+LEO synergistic retrieval (combining TROPOMI and OLCI).
- GRASP/TROPOMI refers to the single-instrument GRASP retrieval from TROPOMI-only measurements.
- SYREMIS LEO+GEO: TROPOMI refers to the TROPOMI retrieval results extracted from the SYREMIS LEO+GEO synergistic retrieval (combining TROPOMI, OLCI, and AHI).
Similar notation was used for OLCI data extraction from the synergy and single-instrument retrieval.
The obtained results show improvements in AOD in comparison to the single-instrument retrievals (updated Section 3.2, Figs. 8 and 11, Tables 8 and 10). AE and SSA from the synergy are generally comparable with those from the TROPOMI/GRASP single-instrument retrievals, with slightly improved performance in certain statistical metrics, such as an increased number of matchups (Figs. 9 and 10, Table 9). At the same time, the results for the datasets associated with OLCI measurements are considerably improved for AOD and SSA in comparison to the single-instrument GRASP/OLCI retrieval (Figs. 11 and 13, Table 11), while the AE is similar to the data extracted for TROPOMI (SYREMIS LEO+LEO: TROPOMI; Figs. 9 and 12). Overall, the criteria for selecting the best approach are based on the performance in 1) AOD, 2) AE, 3) SSA, and 4) the number of pixels that pass the high-quality filtering. According to these criteria, the synergetic approach clearly shows considerable improvements in comparison to the single-instrument retrieval (Section 3.2). As summarized in our conclusions, TROPOMI (with the most information among the considered instruments) serves as the main driver in the synergistic retrieval.
Figure 8: What does QA>=2 mean? The meaning of that expression needs to be clarified. Also, it seems that the performance of SYREMIS, compared with GRASP, is better over land. That contradicts the authors’ claim that the synergistic retrieval is better than the retrieval from individual instruments.
Response:
In TROPOMI/GRASP products, a quality flag (QA) is assigned to each pixel based on spectral fitting residuals and a moving window smoothness test, as described in Litvinov et al. (2024). Specifically, the QA flag has three categories: QA = 1: "marginal" retrieval quality; QA = 2: "good" retrieval quality; QA = 3: "very good" retrieval quality.
In the updated figures in Section 3, we applied the same data filtering approach. This does not change the main conclusions but helps to avoid confusion. The applied filter is described in Section 3. The results presented in Section 3.2 clearly show a much better performance of the synergy in comparison to the single-instrument retrieval.
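In code terms, the QA >= 2 filter reads as follows (illustrative snippet; the column names and values are made up):

```python
import pandas as pd

# QA = 1 "marginal", QA = 2 "good", QA = 3 "very good";
# the figures keep only "good" and "very good" retrievals.
pixels = pd.DataFrame({"QA": [1, 2, 3, 2], "aod_550": [0.30, 0.21, 0.22, 0.25]})
high_quality = pixels[pixels["QA"] >= 2]   # drops the "marginal" (QA = 1) rows
```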
Lines 393-394: “One can see from Fig. 16 that, overall, SYREMIS/GRASP AOD retrieval corresponds well to VIIRS, MODIS, TROPOMI/GRASP and OLCI/GRASP products.” I disagree. It seems to me that TROPOMI/GRASP results agree well with VIIRS results, but SYREMIS results differ considerably, especially over the Sahara and the Middle East (bright surfaces?).
Response:
Section 3.4 was revised. We emphasize that we provide only preliminary results here. A more detailed analysis and intercomparison will be the subject of the second paper on the synergy. We discuss that the intercomparison shows qualitative agreement with the single-instrument retrievals. At the same time, although the synergy performs better over AERONET stations (SYREMIS/TROPOMI vs. GRASP/TROPOMI, Section 3.2), the qualitative intercomparison shows substantial differences in the AOD and BRDF properties between the synergy and the single-instrument retrievals. In particular, in terms of the AOD spatial distribution, TROPOMI/GRASP appears to agree better with SNPP VIIRS/DB than TROPOMI/SYREMIS, particularly over bright surfaces in regions such as the Sahara and the Middle East.
We are currently working on separate papers that will focus on a comprehensive evaluation, including quantitative validation with AERONET data and detailed intercomparisons with other satellite products. On the other hand, thanks to the synergistic use of multiple sensors, multiple daily observation angles, and cumulative multi-day observations, SYREMIS retrievals can better characterize the surface BRDF. For example, the dynamics of the second and third BRDF scaling parameters illustrated in Figures 18 and 19 demonstrate this capability. Therefore, we have high confidence in the accuracy of the TROPOMI/SYREMIS aerosol retrievals over bright surfaces. Overall, we plan to conduct a more comprehensive evaluation of SYREMIS retrievals compared to single-instrument retrievals in a separate dedicated study.
Note that in the updated validation plots above, the number of datapoints N in each plot differs from the number in the original Figs. 2 and 3 of the first submission; this is because the updated plots were created with updated AERONET Level 2 products. The latest AERONET access date is 18 July 2025, about two years after the creation of the original plots in the first version of the manuscript.
RC3: 'Comment on egusphere-2025-1536', Anonymous Referee #3, 10 Jun 2025
This paper presents a synergistic approach for aerosol property retrieval from multiple satellites with the GRASP algorithm, called SYREMIS/GRASP. The intent is to combine the different types of information content into a coherent product that merges both LEO and GEO observations. This is a laudable goal and exists within a framework of the GRASP algorithm which has been developing this capability.
I do unfortunately have significant concerns about the fundamental approach of SYREMIS/GRASP, specifically the lack of direct accounting for measurement and model uncertainty, and the ad-hoc basis for the smoothness criteria. Furthermore, the approach was not explained in sufficient detail to be reproducible. I found myself struggling to understand how the ‘weighting’ parameters were derived, and what exactly was performed during retrieval setup optimization.
I do not believe the manuscript successfully makes the case that the figures and other results support the conclusions. Often the analysis and figures are poorly conceived, such as inappropriate histogram bin widths in figure 4, overuse of scatterplots which do not clearly indicate comparison skill, and statistical metrics that are calculated without analysis of what those values mean. The number of figures diminishes the impact. I counted 130 panels among 19 figures. In a peer-reviewed publication only the salient points should be reported. I think often figures were included without considering if they represent an appropriate analysis given the amount of data or other matters (such as panels e and f in figure 6).
Then there is the issue of scope and purpose. The conclusion states briefly that the high temporal resolution results could be used for “air quality studies, for monitoring aerosol transport, aerosol-cloud interaction etc.” I found myself wondering why aerosol data assimilation is not used instead. This has been done for years (one quick example: Yumimoto, et al 2016. https://doi.org/10.1002/2016GL069298). The nice thing about assimilation is that it should represent the correlation between parameters well. It is certainly more sophisticated than the selection of smoothness parameters in GRASP, the values of which I find difficult to connect to actual spatial/temporal variability. If there is some other purpose than the sort of studies one might do with a model that assimilates satellite data, it should be described.
Finally, the grammar in the manuscript needs help. Several times I found myself unsure as to what was intended to be communicated.
The authors of this publication have produced excellent work in the past, and I believe they are able to do so with this manuscript. However, it requires major revision before I think it is ready for publication.
Specific comments:
Page 1, paragraph 1: Spell out SYREMIS
Page 3, some HARP2 and SPEXone references:
Fu, G., Rietjens, J., Laasner, R., van der Schaaf, L., van Hees, R., Yuan, Z., van Diedenhoven, B., Hannadige, N., Landgraf, J., Smit, M., Knobelspiesse, K., Cairns, B., Gao, M., Franz, B., Werdell, J., and Hasekamp, O.: Aerosol Retrievals From SPEXone on the NASA PACE Mission: First Results and Validation, Geophysical Research Letters, 52(4), e2024GL113525, https://doi.org/10.1029/2024GL113525, 2025.
Hasekamp, O. P., Fu, G., Rusli, S. P., Wu, L., Noia, A. D., aan de Brugh, J., Landgraf, J., Smit, J. M., Rietjens, J., and van Amerongen, A.: Aerosol measurements by SPEXone on the NASA PACE mission: expected retrieval capabilities, J. Quant. Spectrosc. Ra., 227, 170-184, https://doi.org/10.1016/j.jqsrt.2019.02.006, 2019.
Werdell, P. J., Franz, B., Poulin, C., Allen, J., Cairns, B., Caplan, S., Cetinić, I., Craig, S., Gao, M., Hasekamp, O., Ibrahim, A., Knobelspiesse, K., Mannino, A., Martins, J. V., McKinna, L., Meister, G., Patt, F., Proctor, C., Rajapakshe, C., Ramos, I. S., Rietjens, J., Sayer, A., and Sirk, E.: Life after launch: a snapshot of the first six months of NASA's Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission, in: Sensors, Systems, and Next-Generation Satellites XXVIII, 131920E, SPIE, 2024.
McBride, B. A., Sienkiewicz, N., Xu, X., Puthukkudy, A., Fernandez-Borda, R., and Martins, J. V.: In-flight characterization of the Hyper-Angular Rainbow Polarimeter (HARP2) on the NASA PACE mission, in: Sensors, Systems, and Next-Generation Satellites XXVIII, 131920H, SPIE, 2024.
Page 3 line 86 – condition ‘v’ says retrieval should be based on an ‘advanced inversion approach’, which is not defined. An advanced approach should also account for observation and model uncertainty, which does not seem to be the case in this paper. I like Maahn et al. 2020 because it lays out the reasoning for this, and Povey 2015 and Sayer 2020’s take on measurement uncertainty. I do not believe one can honestly combine data synergistically without accounting for measurement uncertainty – how else can a retrieval algorithm reconcile biases or inconsistencies between the measurements? I know you used a ‘weighting’ parameter, but this doesn’t appear to be based upon an understanding of measurement uncertainty (I am a little unsure what was actually done with the weighting parameter, more on that later). Additionally, it is not clear to me if the output product has a prognostic error estimate, which seems like it would be important given the different sources of data.
Maahn, M., Turner, D. D., Löhnert, U., Posselt, D. J., Ebell, K., Mace, G. G., and Comstock, J. M.: Optimal Estimation Retrievals and Their Uncertainties: What Every Atmospheric Scientist Should Know, Bulletin of the American Meteorological Society, 101(9), E1512-E1523, https://doi.org/10.1175/BAMS-D-19-0027.1, 2020.
Povey, A. C. and Grainger, R. G.: Known and unknown unknowns: uncertainty estimation in satellite remote sensing, Atmos. Meas. Tech., 8(11), 4699-4718, https://doi.org/10.5194/amt-8-4699-2015, 2015.
Sayer, A. M., Govaerts, Y., Kolmonen, P., Lipponen, A., Luffarelli, M., Mielonen, T., Patadia, F., Popp, T., Povey, A. C., Stebel, K., and Witek, M. L.: A review and framework for the evaluation of pixel-level uncertainty estimates in satellite aerosol remote sensing, Atmos. Meas. Tech., 13(2), 373-404, https://doi.org/10.5194/amt-13-373-2020, 2020.
Table 1: It would be nice to add the hyperspectral resolution for TROPOMI.
Page 7 section 2.1. It is a little unclear what exactly is being done with spectral ‘harmonization’. Is it as simple as just adding all spectral channels to the measurement vector? Or is something more being done? I feel like this step should include radiometric harmonization as well, i.e. removing biases between measurements.
Page 8, line 198-199. The method of weighting is explained as “realized with application of the different requirements on the standard deviation of measurements fitting for the different spectral bands”. This seems like an important description of what weighting is, but I don’t understand the language. “Standard deviation” is mentioned several times but I have no idea what this is a standard deviation of. My closest guess is that it has something to do with minimizing the difference between observations and AERONET data. If that is the basis for deriving these weights, it needs to be described in far more detail, since the specifics of which AERONET data were used could drive your results. Also – what does it mean to ‘exchange measurements between weighting groups’? This is poorly explained.
Ultimately, I cannot say with any confidence that I understand how you are weighting the instruments.
Page 9, table 3 (and text). In some cases, you defined the temporal threshold in terms of hours, which I presume means the associated parameter is held constant in that time period. Does this mean that beyond the time period there is no constraint at all? I am also attempting to reconcile this with the numerical smoothness constraints which are also provided. Additionally, I struggle to connect those values with physical reality – where do they come from? How do you justify the choice of values in the ‘relaxed’ case? Shouldn’t these be based on some analysis of aerosol temporal and spatial variability, such as Alexandrov et al. 2004 or Shinozuka et al. 2010 (or something more recent)?
Alexandrov, M. D., Marshak, A., Cairns, B., Lacis, A. A., and Carlson, B. E.: Scaling Properties of Aerosol Optical Thickness Retrieved from Ground-Based Measurements, J. Atmos. Sci., 61(9), 1024-1039, 2004.
Shinozuka, Y., Redemann, J., Livingston, J., Russell, P., Clarke, A., Howell, S., Freitag, S., O'Neill, N., Reid, E., Johnson, R., and others: Airborne observation of aerosol optical depth during ARCTAS: vertical profiles, inter-comparison, fine-mode fraction and horizontal variability, Atmos. Chem. Phys. Discuss., 10, 18315-18363, 2010.
Figures 2 and 3 (although these comments apply in a similar nature to many other figures). What is one, in a broad sense, supposed to understand from these six plots? The text says ‘one can observe essential improvement’ from them. I strongly disagree. All six look very, very similar. Perhaps the numerical statistical metrics written on each plot are meant to show it, but these are barely described. Which metric should we use? What specifically has improved from one plot to the other?
I realize that many algorithm developers in our community use scatterplots such as these to illustrate the success (or otherwise) of a given algorithm. The truth is that they are not appropriate, and figures 2 and 3 are a very good example of why. For starters, you are representing a parameter which is lognormally distributed on axes that are not, and the maximum value of the range is far larger than the majority of the data. So, you have most of the data represented in a tiny corner of the plot. It is impossible to see differences.
Furthermore, you have plotted a linear regression to the data (why?) and there are unexplained grey shaded areas which I presume are GCOS boundaries. The parameters of the linear fit, as well as the R2 value, are meaningless for explaining what you are attempting to show, which is how well the GRASP retrieved AOD can represent the AERONET AOD.
Here’s how I would do this: use a mean bias plot (also known as a Tukey or Bland-Altman plot). Consider the data as pairs of corresponding GRASP and AERONET AOD. On the x-axis, plot the mean of each pair, (AOD_grasp + AOD_aeronet)/2. Use a log scale for this axis. For the y-axis, plot the bias, AOD_grasp − AOD_aeronet. Use a linear scale for this axis. This will expand the plotted area of interest and make it clear if there is a bias or any scale dependence. The y-axis scatter will express differences in the unit that matters. Among the statistical metrics, I think the percentage fitting within GCOS thresholds is best (since they scale with AOD), but this should be explained, including what you expect the values to be.
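For concreteness, a minimal matplotlib sketch of the plot described above, with synthetic data standing in for matched GRASP/AERONET AOD pairs (the envelope threshold below is a placeholder, not the GCOS definition adopted in the paper):

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for matched retrieval/reference AOD pairs.
rng = np.random.default_rng(0)
aod_aeronet = rng.lognormal(mean=np.log(0.15), sigma=0.8, size=2000)
aod_grasp   = aod_aeronet * rng.lognormal(mean=0.0, sigma=0.25, size=2000)

pair_mean = 0.5 * (aod_grasp + aod_aeronet)   # x: mean of each pair (log scale)
bias      = aod_grasp - aod_aeronet           # y: bias (linear scale)

# Placeholder envelope -- substitute the GCOS definition used in the paper.
within = np.abs(bias) <= np.maximum(0.04, 0.10 * aod_aeronet)

fig, ax = plt.subplots()
ax.scatter(pair_mean, bias, s=4, alpha=0.3)
ax.set_xscale("log")                 # AOD is roughly lognormally distributed
ax.axhline(0.0, color="k", lw=0.8)
ax.set_xlabel("(AOD_GRASP + AOD_AERONET) / 2")
ax.set_ylabel("AOD_GRASP - AOD_AERONET")
ax.set_title(f"{100.0 * within.mean():.1f}% within placeholder envelope")
plt.show()
```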
Page 13, paragraph 1 – similar to above: the results are described as ‘high quality’. What is your threshold for ‘high quality’? Which parameters matter, and what do you expect them to be?
Figures 4-11 – are all these figures necessary? What are we showing with the TROPOMI or OLCI extracts? Can this be demonstrated with fewer figures? My above comments for the scatterplots apply. The histograms are good, but the bin size should be adjusted for the number of parameters – for example, the ‘green’ high optical depth case is not meaningfully presented, and this applies to many other cases too. Some of the plots don’t have enough data to be meaningful (i.e. Fig. 6e and f).
Citation: https://doi.org/10.5194/egusphere-2025-1536-RC3
AC5: 'Reply on RC3', Pavel Litvinov, 29 Aug 2025
Reviewer 3 (the replies to the comments are marked in red colour in the Supplement PDF file)
This paper presents a synergistic approach for aerosol property retrieval from multiple satellites with the GRASP algorithm, called SYREMIS/GRASP. The intent is to combine the different types of information content into a coherent product that merges both LEO and GEO observations. This is a laudable goal and exists within a framework of the GRASP algorithm which has been developing this capability.
I do unfortunately have significant concerns about the fundamental approach of SYREMIS/GRASP, specifically the lack of direct accounting for measurement and model uncertainty, and the ad-hoc basis for the smoothness criteria. Furthermore, the approach was not explained in sufficient detail to be reproducible. I found myself struggling to understand how the ‘weighting’ parameters were derived, and what exactly was performed during retrieval setup optimization.
I do not believe the manuscript successfully makes the case that the figures and other results support the conclusions. Often the analysis and figures are poorly conceived, such as inappropriate histogram bin widths in figure 4, overuse of scatterplots which do not clearly indicate comparison skill, and statistical metrics that are calculated without analysis of what those values mean. The number of figures diminishes the impact. I counted 130 panels among 19 figures. In a peer-reviewed publication only the salient points should be reported. I think often figures were included without considering if they represent an appropriate analysis given the amount of data or other matters (such as panels e and f in figure 6).
Response:
The reviewer's criticism was accounted for in the revised manuscript. In general, we emphasized that the SYREMIS/GRASP approach is based on the fundamental principles of the GRASP algorithm. Clear references to the fundamental basis of the GRASP algorithm were provided for readers who are not familiar with the multi-term LSM concept. We also provided a discussion of how this fundamental basis is used in the synergetic approach. In particular, the discussion of the measurement preparation for the synergy, the forward models, the instrument “weighting”, and the a priori constraints in the SYREMIS/GRASP approach was introduced in Sections 2.1-2.4. Section 2.4 describes in detail the tests that were used to evaluate and select the optimized retrieval setup.
In this manuscript, we provide a comprehensive consideration of the SYREMIS/GRASP results. In particular:
- Section 3.1 is devoted to the consistency of the aerosol and surface retrievals from the different instruments in the LEO+LEO synergy.
- Section 3.2 shows the advantages of the LEO+LEO synergy over the single-instrument retrieval.
- Section 3.3 considers the LEO+GEO synergy.
- Section 3.4 shows preliminary results of the inter-comparison on a global scale.
Such comprehensive consideration requires a considerable number of figures and tables for all considered aerosol parameters (AOD, AE, SSA). We think that the number of figures is adequate for such an analysis. Nevertheless, we agree with the reviewer that more discussion and analysis should be added. Therefore, we have revised the presentation of the figures, included tables to summarize the key statistical metrics, and considerably extended the discussion and analysis throughout the manuscript.
Then there is the issue of scope and purpose. The conclusion states briefly that the high temporal resolution results could be used for “air quality studies, for monitoring aerosol transport, aerosol-cloud interaction etc.” I found myself wondering why aerosol data assimilation is not used instead. This has been done for years (one quick example: Yumimoto et al., 2016, https://doi.org/10.1002/2016GL069298). The nice thing about assimilation is that it should represent the correlation between parameters well. It is certainly more sophisticated than the selection of smoothness parameters in GRASP, the values of which I find difficult to connect to actual spatial/temporal variability. If there is some other purpose than the sort of studies one might do with a model that assimilates satellite data, it should be described.
Response:
The SYREMIS multi-instrument synergetic retrieval approach converts the merged L1 measurements from multiple satellites into a synergetic L2 aerosol and surface product. As presented in the Introduction, the purpose of the SYREMIS/GRASP approach is to improve the L2 output both in terms of extended aerosol/surface properties and better temporal resolution. Certainly, this L2 output can be used for different purposes, including assimilation into models (see, for example, Garrigues, S., Remy, S., Chimot, J., et al., Atmos. Chem. Phys., 22, 14657–14692, https://doi.org/10.5194/acp-22-14657-2022, 2022). Nevertheless, this is not the only goal (and not even the main goal) of the satellite retrieval. The more information can be extracted from the satellite measurements (physical, chemical, temporal, etc.), the wider the range of applications of the satellite L2 product: from monitoring the state of the atmosphere and surface to investigating physical/chemical processes in the atmosphere. As described in the Introduction, the main product of most satellites now is AOD. In this paper, we present the synergetic SYREMIS approach, which allows overcoming a number of the existing limitations. The approach is based on the GRASP algorithm (Dubovik et al., 2011, 2021a), with already demonstrated possibilities to account for the temporal and spatial relations between aerosol/surface characteristics. The key elements of the GRASP algorithm are smoothness constraints. Compared to the assumptions (based on limited observations) used in traditional algorithms such as the Dark Target algorithm (Levy, 2016), the GRASP smoothness constraints allow a more natural separation of surface and atmospheric signals based on their inherent differences in spatial, temporal, and spectral variability. A recently developed merged satellite aerosol product suite, XAERDT (https://amt.copernicus.org/articles/17/5455/2024/), has a similar purpose and is based on the separate processing of six different satellite measurements using the Dark Target algorithm. A manuscript describing the detailed comparison between the SYREMIS and XAERDT products is under preparation. We observed unique advantages of the SYREMIS product, which will be the subject of those studies.
The selection of smoothness parameters in GRASP is based on rigorous retrieval tests (see the new Section 2.4) over representative global sites. These tests minimize the observation and model uncertainties in GRASP, and the resulting smoothness constraints have a solid physical meaning related to the spatial/temporal/spectral variability of the retrieved parameters.
Finally, the grammar in the manuscript needs help. Several times I found myself unsure as to what was intended to be communicated.
The authors of this publication have produced excellent work in the past, and I believe they are able to do so with this manuscript. However, it requires major revision before I think it is ready for publication.
Response:
The manuscript was reviewed, and the grammar was improved.
Specific comments:
Page 1, paragraph 1: Spell out SYREMIS
Response:
SYREMIS (SYnergetic REtrieval from multi-MISsion instruments) is introduced
Page 3, some HARP2 and SPEXone references:
Fu, G., Rietjens, J., Laasner, R., van der Schaaf, L., van Hees, R., Yuan, Z., van Diedenhoven, B., Hannadige, N., Landgraf, J., Smit, M., Knobelspiesse, K., Cairns, B., Gao, M., Franz, B., Werdell, J., and Hasekamp, O.: Aerosol Retrievals From SPEXone on the NASA PACE Mission: First Results and Validation, Geophysical Research Letters, 52(4), e2024GL113525, https://doi.org/10.1029/2024GL113525, 2025.
Hasekamp, O. P., Fu, G., Rusli, S. P., Wu, L., Noia, A. D., aan de Brugh, J., Landgraf, J., Smit, J. M., Rietjens, J., and van Amerongen, A.: Aerosol measurements by SPEXone on the NASA PACE mission: expected retrieval capabilities, J. Quant. Spectrosc. Ra., 227, 170–184, https://doi.org/10.1016/j.jqsrt.2019.02.006, 2019.
Werdell, P. J., Franz, B., Poulin, C., Allen, J., Cairns, B., Caplan, S., Cetinić, I., Craig, S., Gao, M., Hasekamp, O., Ibrahim, A., Knobelspiesse, K., Mannino, A., Martins, J. V., McKinna, L., Meister, G., Patt, F., Proctor, C., Rajapakshe, C., Ramos, I. S., Rietjens, J., Sayer, A., and Sirk, E.: Life after launch: a snapshot of the first six months of NASA's Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission, in: Sensors, Systems, and Next-Generation Satellites XXVIII, 131920E, SPIE, 2024.
McBride, B. A., Sienkiewicz, N., Xu, X., Puthukkudy, A., Fernandez-Borda, R., and Martins, J. V.: In-flight characterization of the Hyper-Angular Rainbow Polarimeter (HARP2) on the NASA PACE mission, in: Sensors, Systems, and Next-Generation Satellites XXVIII, 131920H, SPIE, 2024.
Response:
Many thanks for the suggested references. They were introduced into the manuscript.
Page 3 line 86 – condition ‘v’ says retrieval should be based on an ‘advanced inversion approach’ which is not defined.
Response:
We reformulated what we mean by “advanced inversion approach” as the one that satisfies the conditions (v) and (vi):
(v) the retrieval should be based on the flexible forward models adaptable to the information content of the measurements;
(vi) the retrieval should be able to account for diverse measurements with, possibly, different calibration accuracy, different spectral and spatial resolution;
An advanced approach should also account for observation and model uncertainty, which does not seem to be the case in this paper. I like Maahn et al. (2020) because it lays out the reasoning for this, and Povey and Grainger (2015) and Sayer et al. (2020) for their take on measurement uncertainty. I do not believe one can honestly combine data synergistically without accounting for measurement uncertainty – how else can a retrieval algorithm reconcile biases or inconsistencies between the measurements? I know you used a ‘weighting’ parameter, but this doesn’t appear to be based upon an understanding of measurement uncertainty (I am a little unsure what was actually done with the weighting parameter, more on that later). Additionally, it is not clear to me if the output product has a prognostic error estimate, which seems like it would be important given the different sources of data.
Maahn, M., Turner, D. D., Löhnert, U., Posselt, D. J., Ebell, K., Mace, G. G., and Comstock, J. M.: Optimal Estimation Retrievals and Their Uncertainties: What Every Atmospheric Scientist Should Know, Bulletin of the American Meteorological Society, 101(9), E1512–E1523, https://doi.org/10.1175/BAMS-D-19-0027.1, 2020.
Povey, A. C. and Grainger, R. G.: Known and unknown unknowns: uncertainty estimation in satellite remote sensing, Atmos. Meas. Tech., 8(11), 4699–4718, https://doi.org/10.5194/amt-8-4699-2015, 2015.
Sayer, A. M., Govaerts, Y., Kolmonen, P., Lipponen, A., Luffarelli, M., Mielonen, T., Patadia, F., Popp, T., Povey, A. C., Stebel, K., and Witek, M. L.: A review and framework for the evaluation of pixel-level uncertainty estimates in satellite aerosol remote sensing, Atmos. Meas. Tech., 13(2), 373–404, https://doi.org/10.5194/amt-13-373-2020, 2020.
Response:
We do not agree with the reviewer's statement that accounting for observation uncertainties “does not seem to be the case in this paper”. In the SYREMIS/GRASP approach, they are accounted for through the standard deviations of the uncertainties, which are associated with the standard deviation of the measurement fit (Dubovik et al., 2011, 2021a) and contribute to the instrument weighting. Therefore, the weights (the importance of the measurements) are driven by the known (or assumed) standard deviations of the uncertainties in each data set: the smaller the standard deviation of the measurement fitting required in the retrieval, the bigger the “weight” of such measurements can be set in the synergy. A detailed description of the data weighting within the GRASP multi-term LSM concept can be found in Dubovik et al. (2004) and Dubovik et al. (2021a), and a brief discussion is also provided in Sections 2.2 and 2.4 and Appendix A (Eqs. (3A)-(12A)) of the revised manuscript.
Table 1: It would be nice to add the hyperspectral resolution for TROPOMI.
Response:
Thank you for pointing this out. The spectral resolutions are added in Table 1.
Page 7, Section 2.1. It is a little unclear what exactly is being done with spectral ‘harmonization’. Is it as simple as just adding all spectral channels to the measurement vector? Or is something more being done? I feel like this step should include radiometric harmonization as well, i.e. removing biases between measurements.
Response:
By spectral “harmonization” we meant the selection of spectral bands from the different instruments in the synergy. This is corrected in the text to avoid misunderstanding. Checking for biases is part of the optimization remote sensing test (ii) described in Section 2.4 of the revised manuscript. Here, we apply the GROSAT/GRASP approach, performing AERONET/satellite synergetic retrieval, and compare the derived surface products. Depending on the result of the GROSAT/GRASP test, the decision about an instrument “bias” is taken (Sections 2.2 and 2.4, Appendix A).
Page 8, lines 198-199. The method of weighting is explained as “realized with application of the different requirements on the standard deviation of measurements fitting for the different spectral bands”. This seems like an important description of what weighting is, but I don’t understand the language. “Standard deviation” is mentioned several times, but I have no idea what this is the standard deviation of. My closest guess is that it has something to do with minimizing the difference between observations and AERONET data. If that is the basis for deriving these weights, it needs to be described in far more detail, since the specifics of which AERONET data were used could drive your results. Also – what does it mean to ‘exchange measurements between weighting groups’? This is poorly explained.
Ultimately, I cannot say with any confidence that I understand how you are weighting the instruments.
Response:
Indeed, the “weighting” refers to the importance of the different sets of measurements or a priori data in the global fitting of all input data by the forward models. In brief, in the frame of the multi-term LSM (Dubovik et al., 2011, 2021a), the “weights” correspond to Lagrange multipliers defined as the squared ratios of the standard deviation of the uncertainties in the selected (“main”) data set (i.e. the radiances of the first satellite) to the standard deviation of the uncertainties in each data set. Therefore, determining or assuming the standard deviation of the uncertainties in each data set allows for defining the importance of the considered measurements and a priori data. The quantitative description of the “weighting” is given in Sections 2.2 and 2.4 and Appendix A of the revised manuscript, with reference to Dubovik et al. (2011, 2021a) for more details.
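[Editor's note: for orientation, the multi-term cost function consistent with this description can be sketched as follows. The notation is ours, following the general form in Dubovik et al. (2011), and not necessarily that of the manuscript.]

```latex
% Multi-term LSM cost: each data set k (measurements or a priori terms)
% contributes a misfit term scaled by a Lagrange multiplier gamma_k.
\Psi(\mathbf{a}) = \sum_{k} \gamma_{k}
  \left[\mathbf{f}_{k}(\mathbf{a}) - \mathbf{m}_{k}\right]^{\mathrm{T}}
  \mathbf{W}_{k}^{-1}
  \left[\mathbf{f}_{k}(\mathbf{a}) - \mathbf{m}_{k}\right],
\qquad
\gamma_{k} = \frac{\sigma_{1}^{2}}{\sigma_{k}^{2}}
```

Here a is the vector of retrieved parameters, f_k and m_k are the forward model and measurements of data set k, W_k is its normalized uncertainty covariance, and σ_1 and σ_k are the uncertainty standard deviations of the main and k-th data sets, so a smaller σ_k yields a larger weight γ_k.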
Page 9, table 3 (and text). In some cases, you defined the temporal threshold in terms of hours, which I presume means the associated parameter is held constant in that time period. Does this mean that beyond the time period there is no constraint at all? I am also attempting to reconcile this with the numerical smoothness constraints which are also provided. Additionally, I struggle to connect those values with physical reality – where do they come from? How do you justify the choice of values in the ‘relaxed’ case? Shouldn’t these be based on some analysis of aerosol temporal and spatial variability, such as Alexandrov et al. (2004) or Shinozuka et al. (2010) (or something more recent)?
Alexandrov, M. D., Marshak, A., Cairns, B., Lacis, A. A., and Carlson, B. E.: Scaling Properties of Aerosol Optical Thickness Retrieved from Ground-Based Measurements, J. Atmos. Sci., 61(9), 1024–1039, 2004.
Shinozuka, Y., Redemann, J., Livingston, J., Russell, P., Clarke, A., Howell, S., Freitag, S., O'Neill, N., Reid, E., Johnson, R., and others: Airborne observation of aerosol optical depth during ARCTAS: vertical profiles, inter-comparison, fine-mode fraction and horizontal variability, Atmos. Chem. Phys. Discuss., 10, 18315–18363, 2010.
Response:
The developed retrieval relies on a priori constraints on the temporal variability of the retrieved aerosol and surface reflectance parameters. These constraints are introduced as a priori estimates of the first derivatives (approximated by finite differences) with respect to time. These estimates were assumed to be normally distributed with zero mean and with standard deviations assigned based on the known variability of each parameter. The use of limitations on finite differences, where the limits are applied to the differences between the parameters divided by the time period over which they were observed, is a very convenient way to ensure flexibility in applying temporal constraints. Indeed, such constraints allow much larger variability for parameters corresponding to observations more distant in time compared to parameters from observations that are very close in time. However, in cases when observations are nearly simultaneous or very close in time, the finite differences may take very large values that can produce an imbalance in the practical fitting. To avoid such difficulties, we used the temporal threshold, which considerably constrains the temporal variability within a specified period of time (the threshold). A full description can be found in Dubovik et al. (2011, 2021a). It should be noted that in Section 2.4 we emphasize that the values provided in Table 4 are suggested based on rather general considerations of how each parameter can vary in time (how large the temporal derivatives can be); in practice, the exact choice of the parameters is usually defined based on sensitivity studies and retrieval trials with real data. Moreover, these values are not really unique, since quite similar retrieval performance for aerosol and surface properties can be obtained within a certain range of the constraints. Nevertheless, the chosen parameters (“weights”) are generally within an optimal range in the sense that they adequately reflect the tendencies in the temporal dependencies of aerosol and surface properties, adapted to the information content of the LEO+LEO and LEO+GEO synergies. For example, in the case of the LEO+LEO synergies, the time constraints on the retrieved aerosol properties are nearly irrelevant, while in the case of the LEO+GEO synergies such constraints are very efficient and need to be carefully defined.
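[Editor's note: one plausible reading of this description, written as a schematic penalty term; the notation is ours, and Δt_thr denotes the temporal threshold discussed above.]

```latex
% First-difference temporal constraint: changes in a retrieved parameter
% a(t_i) are penalized per unit time; the lower bound on the time step
% keeps near-simultaneous observations from producing huge finite
% differences.
\Psi_{\mathrm{time}}(\mathbf{a}) = \frac{1}{\sigma_{t}^{2}} \sum_{i}
  \left(
    \frac{a_{i+1} - a_{i}}
         {\max\left(t_{i+1} - t_{i},\; \Delta t_{\mathrm{thr}}\right)}
  \right)^{2}
```

Here σ_t is the assumed standard deviation of the (zero-mean) temporal derivative of the parameter; dividing by the elapsed time allows larger variability between observations that are more distant in time, as described in the response.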
Figures 2 and 3 (although these comments apply in a similar manner to many other figures). What is one, in a broad sense, supposed to understand from these six plots? The text says ‘one can observe essential improvement’ from them. I strongly disagree: all six look very similar. Perhaps one is meant to compare the numerical statistical metrics written on each plot, but these are barely described. Which metric should we use? What specifically has improved from one plot to the other?
Response:
Thank you for pointing out this issue. We have comprehensively revised the presentation of figures in the manuscript and included tables to summarize the key statistical metrics for the inter-comparison. A table (Table A1) is used to compare Figures 2 and 3 (before and after harmonization). For example, the fulfilment of the GCOS requirement improved from 34% to 48%, and the bias for low AOD improved from +0.06 to +0.01.
Table A1. Summary of all-instrument AOD accuracy statistics for the two tests (“Before”, “After”) in Figures 2 and 3.

             GCOS (%)   Bias (AOD)   Bias (AOD<0.2)   R      RMSE   Slope   Intercept
“Before”     34         0.04         0.06             0.89   0.16   0.88    0.08
“After”      48         -0.02        0.01             0.90   0.14   0.88    0.02
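[Editor's note: for reference, the GCOS fulfilment percentage quoted here can be computed as in the sketch below. The envelope max(0.03, 10 % of AOD) is an illustrative assumption; the exact GCOS thresholds used in the paper should be taken from its text.]

```python
import numpy as np

def gcos_fraction(aod_ret, aod_ref, abs_thr=0.03, rel_thr=0.10):
    """Percentage of retrievals whose error lies within a GCOS-style
    envelope max(abs_thr, rel_thr * reference AOD). The thresholds are
    illustrative defaults, not necessarily those of the manuscript."""
    aod_ret = np.asarray(aod_ret)
    aod_ref = np.asarray(aod_ref)
    envelope = np.maximum(abs_thr, rel_thr * aod_ref)
    inside = np.abs(aod_ret - aod_ref) <= envelope
    return 100.0 * inside.mean()
```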
I realize that many algorithm developers in our community use scatterplots such as these to illustrate the success (or otherwise) of a given algorithm. The truth is that they are not appropriate, and figures 2 and 3 are a very good example of why. For starters, you are representing a parameter which is lognormally distributed on axes that are not, and the maximum value of the range is far larger than the majority of the data. So, you have most of the data represented in a tiny corner of the plot. It is impossible to see differences.
Furthermore, you have plotted a linear regression to the data (why?), and there are unexplained grey shaded areas which I presume are GCOS boundaries. The parameters of the linear fit, as well as the R2 value, are meaningless for explaining what you are attempting to show, which is how well the GRASP-retrieved AOD can represent the AERONET AOD.
Here’s how I would do this: use a mean bias plot (also known as a Tukey or Bland-Altman plot). Consider the data as pairs of corresponding GRASP and AERONET AOD. On the x-axis, plot the mean of each pair, (AOD_grasp + AOD_aeronet)/2. Use a log scale for this axis. For the y-axis, plot the bias, AOD_grasp - AOD_aeronet. Use a linear scale for this axis. This will expand the plotted area of interest and make it clear if there is a bias or any scale dependence. The y-axis scatter will express differences in the unit that matters. Among the statistical metrics, I think the percentage fitting within GCOS thresholds is best (since they scale with AOD), but this should be explained, including what you expect the values to be.
Response:
Thanks for the suggestions! Yes, the grey shaded area in the figure represents the GCOS uncertainty boundaries. We have added this clarification to the figure caption for better understanding. To illustrate the scale dependence of the mean bias before and after harmonization, instrument weighting, and retrieval setup optimization, a Tukey plot is presented in the Supplement PDF file. Overall, the revised manuscript presents scatter plots and tables that compare the main statistical metrics more clearly than the initial submission.
Tukey plot corresponding to Fig. 4. Each panel corresponds to the scatter plot and histogram in the same relative location in Fig. 4. In a Tukey plot, the x-axis (log-scaled) is the mean of each SYREMIS-AERONET AOD pair, and the y-axis is the SYREMIS AOD minus the AERONET AOD for the same pair.
Page 13, paragraph 1 – similar to above: the results are described as ‘high quality’. What is your threshold for ‘high quality’? Which parameters matter, and what do you expect them to be?
Response:
Thanks very much for the suggestions! In the revised manuscript, we have included tables throughout to present and compare the validation statistics more clearly and systematically. As shown in Table 6, the high-quality AOD retrievals from SYREMIS achieve approximately 54% within the GCOS uncertainty requirement over land, with a correlation coefficient (R) of ~0.89. In contrast, the S5P/TROPOMI single-instrument retrievals show ~48% within GCOS and an R value of ~0.85.
Figures 4-11 – are all these figures necessary? What are we showing with the TROPOMI or OLCI extracts? Can this be demonstrated with fewer figures? My above comments for the scatterplots apply. The histograms are good, but the bin size should be adjusted for the number of parameters – for example, the ‘green’ high optical depth case is not meaningfully presented, and this applies to many other cases too. Some of the plots don’t have enough data to be meaningful (i.e. Fig. 6e and f).
Response:
Thanks very much for the suggestions! The bin size of each histogram has been adjusted in the revised manuscript. We also revised the presentation of figures and used consistent labels for the TROPOMI and OLCI extracts from the SYREMIS synergy, for example, SYREMIS LEO+LEO: S5P/TROPOMI and SYREMIS LEO+LEO: S3A/OLCI, in contrast with the GRASP single-instrument results GRASP/TROPOMI and GRASP/OLCI.
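[Editor's note: as a generic illustration of adapting bin widths to the amount of data (not necessarily the rule used in the revision), the Freedman-Diaconis estimator built into NumPy scales the number of bins with sample size.]

```python
import numpy as np

rng = np.random.default_rng(0)
aod = rng.lognormal(mean=-1.5, sigma=0.8, size=500)  # synthetic AOD-like sample

# 'fd' (Freedman-Diaconis) chooses a bin width ~ 2*IQR*n^(-1/3), so small
# subsets (e.g. the few high-AOD cases) automatically get wider bins.
edges = np.histogram_bin_edges(aod, bins="fd")
counts, _ = np.histogram(aod, bins=edges)
print(len(edges) - 1, "bins")
```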
Note that in the updated validation plots above, the number of data points N in each plot differs from that in the original Figs. 2 and 3 of the first submission; this is because the updated plots were created with updated AERONET Level 2 products. The latest access date to AERONET was 18 July 2025, about two years after the creation of the original plots in the first version of the manuscript.
RC4: 'Comment on egusphere-2025-1536', Anonymous Referee #4, 13 Jun 2025
Review of Litvinov et al., “Synergetic Retrieval from Multi-Mission Spaceborne Measurements for Enhancement of Aerosol and Surface Characterization”
Summary:
This paper introduces a variant of the GRASP approach, “SYREMIS/GRASP”, that performs synergistic aerosol property retrieval from observations provided by a combination of platforms in Low Earth Orbit (LEO) and geostationary orbit (GEO). The concept is demonstrated as LEO+LEO using the combination of S3A/OLCI, S3B/OLCI and S5P/TROPOMI, and as LEO+GEO by adding Himawari-8. The effort includes aggregating all data on common (temporal/spatial) grids, determining “weights” for each observation based on information content, and applying the forward model (GRASP) that represents the combination of information. Some assumptions about the spatial and temporal smoothness or variability of aerosol and surface properties are necessary. Evaluation against AERONET is performed for the retrieved parameters aerosol optical depth (AOD), Angstrom Exponent (AExp) and single scattering albedo (SSA), demonstrating the “added value” of synergistic retrievals as compared to GRASP-based single-instrument retrievals. On a global scale, retrieved AOD is compared to the VIIRS Deep Blue product, and surface BRDF is compared to historical MODIS-derived products. The paper concludes with suggestions that “such extended aerosol characterization with high temporal resolution is required in air quality studies, for monitoring aerosol transport, aerosol-cloud interaction, etc.”
Evaluation:
I am not convinced by this paper at all. Rather than a comprehensive step-by-step approach, the presentation feels more like magic. What is this SYREMIS anyway? Where is the acronym defined? What is it doing? How are sensors weighted? I don’t understand the paragraph (Lines ~195) about what is meant by “close spectral measurements” and “different accuracy of radiometric calibration and different bandwidths of the observations”. Nor do I understand the claim that the “weight of TROPOMI … should be stronger … can be explained ... by higher information content and better radiometric accuracy” (maybe references?). What information about aerosols and surfaces is contained in each observation/measurement? Where does layer height information come from? Cloud masking?
I am not convinced that the scatter plots are significantly improved by all-instruments versus single-only. And if the accuracy is better, then so what? What are the restrictions if all data must be collocated perfectly? How are instrument calibrations and angular differences included? I just have questions and more questions about GRASP: how the data are selected and collocated, how to deal with missing data, poor calibrations, etc., etc., etc. In fact, the term “etc” is used way too many times in this paper. The figures need more complete captions, the titles of panels need more clarity, and the density scatterplots need colorbars. I do not understand what a sensor-specific “extract” is in any of the figures. In terms of the SYREMIS method, is it slow? Fast? Can it be used in operations? How does this algorithm improve air quality applications (e.g. estimating aerosol at the surface)?
What does it mean to compare SYREMIS with VIIRS for AOD and with MODIS for BRDF? Where and why are there big differences? Because the heritage products do not have sufficient information content? Or is the new technique wrong? The Fig. 16 differences of 0.4 in AOD are huge; so are 0.1 differences over ocean. Figure 19 refers to 1st, 2nd and 3rd parameters; I see only 2nd and 3rd.
Finally, this paper needs severe editing. Many words, sentences and paragraphs make no sense. There are incomplete sentences and an overuse of the term “etc”. Why is “weight” in quotes every time? Also, many acronyms need defining – including SYREMIS, POLDER-3/PARASOL, PACE, HARP, and maybe every satellite mission.
Frankly, while I am disappointed with the authors for sending out such a poor draft of a paper, I am almost angry with the EGUsphere editors for letting this paper go to review. The technique is likely useful, and the community needs good products. However, the paper is nowhere close to being acceptable in its present form.
Citation: https://doi.org/10.5194/egusphere-2025-1536-RC4
AC4: 'Reply on RC4', Pavel Litvinov, 29 Aug 2025
Reviewer 4 (the replies to the comments are marked in red colour in the Supplement PDF file)
Response:
Many thanks to the reviewer for his criticism. We agree that some aspects may not have been clearly presented in the initial version of the manuscript. Therefore, the manuscript was revised, taking into account the reviewer's comments and criticism.
Overall, the following corrections were performed:
1. A more detailed description of the physical basis of the synergetic approach was provided, covering data preparation for the synergy, forward modelling, the application of the GRASP retrieval algorithm in the SYREMIS (SYnergetic REtrieval from multi-MISsion instruments) approach, and the multi-pixel synergetic concept. The weighting and a priori constraints used in SYREMIS/GRASP are discussed in more detail in Section 2 and the new Appendix A. We also emphasized the differences and advantages in comparison to existing methods (Sections 1 and 2).
2. The results are presented in more detail, showing the advantages of the synergetic approach (Sections 2.4 and 3). To avoid confusion, Section 3 was subdivided into four subsections:
- 3.1 SYREMIS/GRASP LEO+LEO synergy performance versus AERONET.
Here, we presented the validation results for all instruments in the synergy, as well as for the AOD, AE, and SSA extracted from the synergy for the specific times of the TROPOMI, OLCI-A, and OLCI-B measurements. This allows us to demonstrate how the retrieved parameters are aligned to each other in time.
- 3.2 SYREMIS/GRASP LEO+LEO inter-comparison with GRASP single-instrument retrieval over AERONET.
Here, we show the added value of the synergy in comparison to the single-instrument GRASP retrieval. The evaluation accounts for the following four criteria simultaneously:
a) Performance in AOD (the highest rank in the evaluation)
b) Performance in AE
c) Performance in SSA
d) Number of pixels that passed the quality filtering criteria
- 3.3 SYREMIS/GRASP LEO+GEO synergetic performance versus AERONET.
Here we presented the validation results for all instruments in the synergy, as well as for the AOD, AE, and SSA extracted from the synergy for the specific times of the TROPOMI, OLCI-A, OLCI-B, and AHI measurements. This allows us to demonstrate how the retrieved parameters are aligned to each other in time in the LEO+GEO synergy. Due to the huge volume of new information in the LEO+GEO synergy, a separate publication on this topic is under preparation.
- 3.4 SYREMIS/GRASP aerosol and surface products global intercomparison.
Here we present the preliminary results of the global AOD and BRDF intercomparison, showing qualitative agreement but quite large quantitative differences between the synergy and single-instrument products. A preliminary discussion is provided, though a separate publication on this topic is under preparation.
To better show the performance of the SYREMIS/GRASP approach, the validation figures were changed and tables with statistical validation characteristics were added. The discussion of the synergetic results is considerably extended. With all these modifications, the advantages of the synergetic retrieval over a single instrument should be clearly visible.
3. The scientific discussion is substantially extended in the manuscript.
4. The paper was carefully reviewed, and the English language was improved.
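For illustration, the simultaneous 4-criteria comparison described in item 2 could be expressed as a lexicographic ranking. The sketch below is a hypothetical helper, not the evaluation code used in the paper; the metric definitions (a GCOS-style AOD fraction, RMSE for AE and SSA) and all numbers are assumptions made for the example.

from dataclasses import dataclass

@dataclass
class ValidationStats:
    aod_fraction_in_envelope: float  # fraction of AOD matchups inside a GCOS-style envelope
    ae_rmse: float                   # RMSE of the Angstrom exponent versus AERONET
    ssa_rmse: float                  # RMSE of the single-scattering albedo versus AERONET
    n_pixels: int                    # number of pixels that passed the quality filtering

def rank_key(s: ValidationStats):
    # Lexicographic ordering: AOD performance has the highest rank, followed by
    # AE, SSA, and finally the number of retained pixels. Negation turns
    # "higher is better" metrics into a min-key.
    return (-s.aod_fraction_in_envelope, s.ae_rmse, s.ssa_rmse, -s.n_pixels)

# Illustrative (made-up) numbers for two retrieval configurations:
candidates = {
    "single-instrument": ValidationStats(0.61, 0.45, 0.040, 9500),
    "LEO+LEO synergy":   ValidationStats(0.68, 0.38, 0.035, 11200),
}
best = min(candidates, key=lambda name: rank_key(candidates[name]))
print("preferred configuration:", best)  # -> LEO+LEO synergy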
Note that in the updated validation plots above, the number of datapoints N in each plot differs from the number in the original Figs. 2 and 3 of the first submission of the manuscript; this is because the updated plots were created with updated AERONET Level 2 products. The latest AERONET access date is 18 July 2025, about two years after the creation of the original plots in the first version of the manuscript.
-
CC1: 'Comment on egusphere-2025-1536', Feng Xu, 21 Jun 2025
Title: Synergetic Retrieval from Multi-Mission Spaceborne Measurements for Enhancement of Aerosol and Surface Characterization
Recommendation: Minor Revision
This manuscript by Litvinov et al. presents a comprehensive study of a synergistic aerosol and surface retrieval method using combined datasets from multiple satellite missions, including S5P/TROPOMI, S3A/B OLCI, and Himawari-8/AHI. The authors propose an innovative methodology for harmonizing diverse measurements from different satellite platforms to better constrain aerosol and surface properties. The retrieval results are validated against AERONET observations and compared with products from other satellite sensors. The novelty of this work lies in its generalized retrieval framework and the demonstrated improvements in aerosol microphysical property retrievals.
Major strengths:
- The proposed algorithm builds upon a well-established heritage algorithm, GRASP, developed by Dubovik et al. (2011, 2004).
- The manuscript is well structured, technically robust, and presents extensive validation results.
- The authors reasonably demonstrate that synergizing LEO+LEO and LEO+GEO measurements can significantly enhance retrieval accuracy. This is probably the first paper to implement a unified retrieval scheme across such a broad constellation of multi-mission instruments.
Minor weaknesses and suggestions:
- The synergetic retrieval relies on the imposition of temporal constraints on the retrieval quantities. In this context, the initial setting of Lagrange multipliers is crucial. It would be helpful for the authors to include a table summarizing these values and cite one or two references describing their derivation—e.g., based on climatological estimates of the least unsmooth solution.
- What is the impact of re-gridding the measurements to a 0.09-degree spatial resolution on the measurement uncertainty? It might help to clarify whether this process reduces the random noise while potentially retaining or amplifying the biases.
- In the summary, the authors could briefly discuss potential future developments, such as a dynamic pixel-scale retrieval uncertainty estimation framework. This could follow the methodology proposed by Dubovik et al. (2004) for quantifying pixel-scale retrieval error and assessing the impact of aerosol model assumptions.
- Figure captions, labels, and legends are currently difficult to read. Please consider increasing the font size and improving clarity to enhance readability.
Overall, this is an important and timely contribution to the field of satellite aerosol remote sensing. I recommend acceptance after minor revisions that address the points above.
Citation: https://doi.org/10.5194/egusphere-2025-1536-CC1
-
AC6: 'Reply on CC1', Pavel Litvinov, 29 Aug 2025
(The replies to the comments are marked in red colour in the Supplement PDF file)
Response:
We are very grateful for the reviewer's very supportive assessment of these studies and for the time spent reading the manuscript and providing valuable feedback.
Minor weaknesses and suggestions:
- The synergetic retrieval relies on the imposition of temporal constraints on the retrieval quantities. In this context, the initial setting of Lagrange multipliers is crucial. It would be helpful for the authors to include a table summarizing these values and cite one or two references describing their derivation—e.g., based on climatological estimates of the least unsmooth solution.
Response:
Many thanks for these suggestions. Section 2 was updated with a more detailed description of the GRASP algorithm, the multi-pixel approach, and the multi-pixel constraints.
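For readers less familiar with the multi-pixel concept, the role of the Lagrange multipliers can be summarized schematically. The expression below is illustrative, following the general multi-pixel formulation of Dubovik et al. (2011) rather than the exact SYREMIS/GRASP settings: the minimized cost function combines the misfit to the synergetic measurements with inter-pixel smoothness terms,

\[
2\Psi(\mathbf{a}) = \big(\mathbf{f}(\mathbf{a})-\mathbf{f}^{*}\big)^{T}\mathbf{W}\big(\mathbf{f}(\mathbf{a})-\mathbf{f}^{*}\big) + \gamma_{t}\,\lVert\mathbf{S}_{t}\mathbf{a}\rVert^{2} + \gamma_{x}\,\lVert\mathbf{S}_{x}\mathbf{a}\rVert^{2} + \gamma_{y}\,\lVert\mathbf{S}_{y}\mathbf{a}\rVert^{2},
\]

where f* denotes the combined multi-instrument observations, W their weighting matrix, S_t, S_x, and S_y are finite-difference operators applied to the retrieved parameters a across neighbouring pixels in time and in the two horizontal coordinates, and the Lagrange multipliers γ_t, γ_x, γ_y set the strength of the corresponding smoothness constraints (e.g., a larger γ_t enforces a smoother temporal evolution of the retrieved surface BRDF).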
- What is the impact of re-gridding the measurements to a 0.09-degree spatial resolution on the measurement uncertainty? It might help to clarify whether this process reduces the random noise while potentially retaining or amplifying the biases.
Response:
It is very difficult to estimate this impact, since it can differ between instruments. In particular, we do not expect a big effect for TROPOMI, since it already has quite a coarse native resolution, and 0.09 degrees is very close to the TROPOMI native resolution in the SWIR range. At the same time, the effect on the OLCI and AHI instruments can be bigger. This may also be one of the reasons why the "weights" for the OLCI and AHI measurements were smaller than for TROPOMI after the "optimization" tests. We have not discussed this in the paper, since separate studies on this topic are still required to obtain clear answers.
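The qualitative expectation can nevertheless be illustrated with a minimal numerical sketch (the noise and bias values below are generic assumptions, not instrument specifications): averaging M native-resolution observations into one 0.09-degree cell suppresses uncorrelated random noise by roughly 1/sqrt(M), while a systematic calibration-like offset survives the averaging unchanged.

import numpy as np

rng = np.random.default_rng(0)
true_value = 0.10   # assumed "true" reflectance of the coarse cell
sigma = 0.02        # assumed per-observation random noise
bias = 0.005        # assumed systematic (calibration-like) offset
M = 16              # assumed number of native pixels per 0.09-degree cell
n_cells = 100_000

obs = true_value + bias + sigma * rng.standard_normal((n_cells, M))
cell_means = obs.mean(axis=1)

print(f"std of cell means: {cell_means.std():.4f} (~ sigma/sqrt(M) = {sigma / np.sqrt(M):.4f})")
print(f"mean offset:       {cell_means.mean() - true_value:+.4f} (the bias is retained)")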
- In the summary, the authors could briefly discuss potential future developments, such as a dynamic pixel-scale retrieval uncertainty estimation framework. This could follow the methodology proposed by Dubovik et al. (2004) for quantifying pixel-scale retrieval error and assessing the impact of aerosol model assumptions.
Response:
These are good suggestions. Indeed, synergy already provides many advantages, but it also raises a number of important remote sensing questions. The effect on error estimation is one of them.
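As a pointer for such a future development, the pixel-scale error estimate suggested by the reviewer can be sketched in the spirit of Dubovik et al. (2004); the expression below is schematic and does not reflect the exact SYREMIS implementation. Near the solution, the retrieval error covariance follows from the Jacobian K of the forward model and the applied constraints,

\[
\mathbf{C}_{\hat{\mathbf{a}}} \approx \Big(\mathbf{K}^{T}\mathbf{W}\mathbf{K} + \sum_{k}\gamma_{k}\,\mathbf{S}_{k}^{T}\mathbf{S}_{k}\Big)^{-1},
\]

whose diagonal elements would provide dynamic, per-pixel standard deviations for AOD, AE, SSA, and the BRDF parameters; the impact of aerosol model assumptions would appear as an additional systematic term not captured by this random-error expression.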
- Figure captions, labels, and legends are currently difficult to read. Please consider increasing the font size and improving clarity to enhance readability.
Response:
Figures and captions were updated.
Overall, this is an important and timely contribution to the field of satellite aerosol remote sensing. I recommend acceptance after minor revisions that address the points above.
Response:
Many thanks. The reviewer’s support is very much appreciated.