This work is distributed under the Creative Commons Attribution 4.0 License.
Improved Mean Field Estimates of GEMS AOD L3 Product: Using Spatio-temporal Variability
Abstract. This study presents advancements in the processing of satellite remote sensing data, focusing mainly on Aerosol Optical Depth (AOD) retrievals from the Geostationary Environment Monitoring Spectrometer (GEMS). The transformation of Level 2 (L2) data, which includes atmospheric state retrievals, into higher-quality Level 3 (L3) data is crucial in remote sensing. Our contributions lie in two novel improvements to the processing algorithm. First, we improve the inverse distance weighting algorithm by incorporating quality flag information into the weight calculation. By assigning weights inversely proportional to the number of unreliable grids, the method can provide more accurate L3 products. We validate this approach through simulation studies and apply it to GEMS AOD data across various regions and wavelengths. Incorporating quality flags into the algorithm thus enables more accurate analyses in remote sensing. Second, we employ a spatio-temporal merging method to address both spatial and temporal variability in AOD data, a departure from previous approaches that focused solely on spatial variability. Our method considers temporal variations spanning previous time intervals. Furthermore, the computed mean fields show spatio-temporal patterns similar to those reported in previous studies, confirming that they capture real-world phenomena. Lastly, utilizing this procedure, we compute mean field estimates for GEMS AOD data, which can provide a deeper understanding of the impact of aerosols on climate change and public health.
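The abstract's first contribution — inverse distance weighting with weights additionally scaled down by the number of set quality-flag bits — can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formula: `idw_with_quality` and the `1/(1 + n_bad_flags)` penalty are assumptions for demonstration; the manuscript's Equation (3) may combine the flags differently.

```python
import numpy as np

def idw_with_quality(grid_pt, obs_pts, obs_vals, n_bad_flags, power=2.0):
    """Inverse distance weighting where each observation is additionally
    down-weighted by its number of set quality-flag bits (hypothetical
    penalty; the paper's exact weight formula may differ)."""
    d = np.linalg.norm(obs_pts - grid_pt, axis=1)
    d = np.maximum(d, 1e-12)                  # guard against zero distance
    w = 1.0 / d**power                        # classic IDW weight
    w = w / (1.0 + np.asarray(n_bad_flags))   # penalize flagged pixels
    return float(np.sum(w * obs_vals) / np.sum(w))
```

With two equidistant observations, raising the flag count of one pulls the estimate toward the cleaner pixel, which is the qualitative behavior the abstract describes.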
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2024-604', Won Chang, 06 Apr 2024
The manuscript proposes an improved version of the existing IDW method by incorporating two advancements: (i) quality flags are incorporated as indicator variables, and (ii) temporal variation is incorporated, which leads to a better uncertainty characterization than the existing IDW method that uses spatial variation only. The method is applied to the GEMS AOD data. The two advancements are clearly noteworthy contributions, and hence the manuscript is suitable for publication in EGUsphere. I have only a few minor comments.
1. In Section 3.1, the radius is set to be 0.1°, but the actual area covered by a circle with a 0.1° radius varies significantly depending on the latitude bands. Since the GEMS AOD data analyzed here cover a large spatial domain, this can introduce some unexpected artifacts. I am wondering if the authors have considered such issues.
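The latitude effect the reviewer raises can be quantified with a small sketch: a circle of fixed angular radius shrinks on the ground as the kilometre length of a longitude degree scales with cos(latitude). The helper `deg_circle_area_km2` is hypothetical and uses a rough spherical-Earth, small-angle approximation.

```python
import math

def deg_circle_area_km2(lat_deg, radius_deg=0.1):
    """Approximate ground area (km^2) of a circle of given angular radius
    centred at latitude lat_deg. The circle in degree space maps to an
    ellipse on the ground whose east-west semi-axis shrinks as cos(lat).
    Rough illustration only (spherical Earth, small-angle assumption)."""
    km_per_deg = 111.32                                 # length of 1 deg latitude
    a = radius_deg * km_per_deg                         # north-south semi-axis
    b = a * math.cos(math.radians(lat_deg))             # east-west semi-axis
    return math.pi * a * b
```

Across a domain spanning low to mid latitudes, the covered area at 45°N is only about 71% (cos 45°) of the area at the equator, which illustrates why a fixed 0.1° radius is not area-preserving.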
2. In Section 3.2.2, I do not quite follow the motivation and the description of the regression-based method. If I understand correctly, the method is regressing the computed average variability \sigma_IDW on different radius values. If so, why is it stated that “the spatio-temporal variability in Equation (4) becomes small as the spatial or temporal distance between grids becomes larger”? Perhaps what is more relevant is the fact that average variability is highly correlated with the number of points used to compute the variability. Also, why is the second-order design matrix used rather than other possibilities?
3. Figure 4 shows that the IDW method clearly leads to oversmoothing. I think the authors need to discuss this issue.
Citation: https://doi.org/10.5194/egusphere-2024-604-RC1
AC1: 'Reply on RC1', Sangwook Kang, 17 May 2024
We are grateful to the reviewer for their insightful comments and thought-provoking questions. In response to their feedback, we have prepared a comprehensive point-by-point explanation. Furthermore, we have made significant revisions to our manuscript, incorporating their suggestions to improve its overall quality and clarity. We trust that the editors and reviewers will find the revised manuscript significantly enhanced and that it more effectively conveys the significance of our research. Please see the attached file for our detailed point-by-point responses.
RC2: 'Comment on egusphere-2024-604', Anonymous Referee #2, 17 Apr 2024
General comments
This manuscript introduces an improved algorithm to produce a Level-3 GEMS aerosol product. The main feature is the consideration of (a) quality flags and (b) spatio-temporal variability. The results presented in the manuscript look promising.
However, the manuscript could benefit from further clarification on why those two aspects (quality flag and variability) are being proposed to be considered in the outlined manner. Therefore, I recommend major revisions before publication.
Specifically, this study uses quality flags to weight observations, not to filter them out. Since this is not a usual approach to employing quality flags, a justification should be provided somewhere in the manuscript. In the averaging step of the proposed algorithm, a quality flag representing the presence of clouds is chosen as a weighting factor. However, in the preceding step, the algorithm filters out pixels with cloud fractions > 0.4 anyway. These two conflicting approaches raise the question of why not simply filter out all pixels with issues.
The manuscript says that spatio-temporal variability is crucial to consider but does not explicitly explain why. One possible reason could be that aerosols are supposed to vary smoothly in time and space, so abrupt variations can be regarded as anomalies. For example, some aerosol retrieval algorithms take advantage of that fact to segregate aerosols from clouds. It will benefit readers if some statements are added about why spatio-temporal variations matter in data filtering.
Lastly, the effectiveness of the proposed method is mostly assessed qualitatively and subjectively. Without further highlights, visual inspections of the entire GEMS domain maps do not tell much about the quality improvements compared to the simple averages.
My specific comments and technical corrections are written below.
Specific comments
- Line 18: GEMS is a satellite instrument (or sensor, payload), not the satellite itself. The satellite is GK-2B, as properly described by the authors in the later part. Therefore, I suggest a slight revision of the expression.
- Lines 43–44: How do you define “realistic”? I suggest using more objective terms here to describe the strength of the proposed algorithm.
- Line 85: The mathematical expressions at the end of this line (the ranges of x_i and y_i) imply that the algorithm uses a square, not a circle, in which case a “radius” is not a correct term. Please clarify.
- Equation (3): I understand that the lack of quantified AOD uncertainties led to using quality flags as weighting factors. However, this approach raises two major discussion points.
First, quality flags are usually meant to be used for simply “filtering out” data points rather than weighting them. I strongly recommend discussing why the authors chose the weighting method over the filtering method, as well as how the Level-3 data quality would be impacted by choosing the weighting method instead of the filtering method.
Second, this approach assumes that the issue represented by each bit of the quality flag has a quantitatively equivalent impact on AOD uncertainty. That is the implication of a simple summation of different quality flag bits (the equation in Line 116). In reality, however, the quantified impacts should differ between issues. Please discuss this point in the manuscript.
- Lines 126–127: Why is it crucial to consider spatio-temporal variability? Although the authors provided a reference (Kikuchi et al., 2018), I strongly recommend describing it in this manuscript as well, at least briefly. Readers can easily have the following question: What happens if the variability is not accounted for?
The Introduction section also says that the spatio-temporal variability should be considered, without saying why. Given that the term “spatio-temporal” is even included in the manuscript title, its importance should be described somewhere. The Introduction might be one of the proper sections for it.
- Equation (6): The inverse of the variability is used for the weighting factor. This approach assumes that the “variability” represents the “uncertainty,” as described in Lines 146 and 147. This approach can be justified because high variability is often due to cloud contamination.
However, what if the variability is real (physical)? For example, let’s say there was a wildfire that produced a lot of aerosols. In that case, the aerosol plumes can have large spatial and temporal variabilities, and they can be accurate and reliable. Could it be considered a limitation of the approach? Please discuss this aspect.
- Line 149: The following sentence in the manuscript could benefit from a backup explanation: “Note that the spatio-temporal variability in Equation (4) becomes small as the spatial or temporal distance between grids becomes larger.” Why is that the case?
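The inverse-variability weighting discussed in the Equation (6) comment above can be sketched as follows. This is a minimal illustration under stated assumptions: `variability_weighted_mean` is a hypothetical helper, and the paper's exact definition of the variability term may differ.

```python
import numpy as np

def variability_weighted_mean(values, sigmas):
    """Merge candidate estimates with weights proportional to the inverse
    of their spatio-temporal variability (sketch of the Eq. 6 idea).
    High-variability estimates, e.g. possibly cloud-contaminated ones,
    receive small weights."""
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float)   # inverse-variability weight
    return float(np.sum(w * values) / np.sum(w))
```

The reviewer's wildfire scenario corresponds to a genuinely large sigma for a reliable value: the sketch makes visible that such a value is down-weighted just like a contaminated one, which is the limitation being flagged.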
- Lines 186–187: The acronym RMSE appears without presenting the full name. The full name for the acronym RMSD is presented above in the manuscript, though. Also, the acronym MSE appears many times without the full name, starting from Line 213. Please give the full names, although the meanings of both RMSE and MSE are not hard to guess.
- Lines 190–191: Shouldn’t these opening lines be placed under Section 4, not 4.1? There is no mention of the “choice of quality flags” in this section (4.1).
- Line 192 (Step 1): There should be a description of each mathematical term.
Specifically, I have four questions: (a) What does the matrix X represent? (i.e., what are the elements?) (b) Why the number of elements should be 4900 X 2? (c) What is U? and (d) What is beta?
I recommend naming each variable first in an English expression and then introducing the corresponding mathematical expression. The former is missing for Step 1 in the current manuscript, although the following Steps have them. Regarding question (b), maybe the description of a 70 X 70 lattice in Step 2 can be relocated to an earlier part.
- Line 195 (Step 2): What is A1? Also, what is the unit for the [0, 1] domain? (The second question also applies to (0, 1) and (1, 1) in Step 1.)
- Line 200 (Step 4): How can you adopt the missing pattern from a geolocated AOD map, when your grid units are not longitude and latitude? In other words, how can readers match the 70 X 70 lattice with the actual GEMS observation domain?
- Line 213: In which step this MSE value is calculated? Between what variables? Is it between the simulated AODs and the IDW results?
- Lines 218–219: If the spatial unit has always been the same within Section 4, this unit description should be placed in Section 4.1 before describing the detailed Steps. It’s not easy to fully understand the Steps in Section 4.1 without information on the unit (see also comment 11 above).
- Figure 4: The algorithm proposed in this study implicitly involves gap-filling (right panel). Although this fact is described in Lines 186 and 290, I suggest adding a sentence somewhere in Lines 37–44 in the Introduction, where the authors describe the proposed method in short. The reason for the suggestion is that gap-filling is not a capability that every Level-3 algorithm has. It would be helpful for readers to be aware of the characteristics at an early stage.
- Lines 227–231: I had a hard time following the quality flag experiments. How can simulation results have quality flags? If the authors simulated them along with AOD values, the process should be described in Section 4.1. Also, how realistic can the simulated quality flags be? As presented in Table 2, each bit has a physical implication. Do the simulated quality flags reflect the physical meaning?
- Figure 5: What does the y-axis represent? I suppose it’s MSE, but please consider presenting it in the figure. Also, what do the whiskers and horizontal bars represent? Please describe the definitions in the caption.
- Line 235: I suppose the implication here is that the presence of clouds and out-of-range SSA/AOD affect the Level 3 quality the most. To just echo comment 4, please discuss the advantage of keeping those pixels without filtering them out.
- Line 242: The pixel size of 7 km X 8 km is not relevant to this study. As stated in Line 52, the GEMS pixel size is 3.5 km X 7.7 km. Also, the first sentence of Section 5.1 fits more as a concluding sentence. Please consider simply moving the position of the opening sentence to the last part of the paragraph. Otherwise, the description of how you should choose the grid size makes a reader expect to see a different conclusion (e.g., possible changes in the grid size).
- Line 243: I recommend putting “sensor” or “instrument” after “A geostationary satellite” in this line (this comment aligns with comment 1).
- Line 253: It says that the algorithm masks data based on quality flags, but this section (5.2) does not give any information on what quality flags were used for that purpose. The criteria were given only for three physical variables, not quality flags. Also, the reference in line 254 (Choi et al., 2020) is for the cloud product, not the aerosol product. From which product does the algorithm extract the quality flags? Please consider elaborating on this matter.
Furthermore, this section (5.2) does not explicitly mention the use of the GEMS L2 cloud product. Most of the information on this use is given in Lines 63–68. Also, this section (5.2) says that the cloud fraction was used as a filtering parameter, but Lines 63–68 say that the algorithm uses the cloud “radiance” fraction. Please consider reorganizing and clarifying.
- Line 257: Is this consideration (i.e., filtering out unreliable data) applied to the simulation experiments presented in Section 4?
Bit 6 in the aerosol quality flag (the presence of clouds) and the threshold here (cloud fraction >= 0.4) have consistent implications. Why did the authors choose to filter out not the former but the latter?
Also, given possible overlaps between the two criteria, I imagine that the quality flag bit 6 doesn’t play a significant role. A discussion is needed on this point.
- Line 266: The sentence “This procedure […] smoother output” seems to fit better as the concluding sentence of the section. Would you consider moving this sentence to a later part (after presenting the detailed interpretation first)?
- Section 5.4: This section hardly gives an evaluation of the results. Of course, the challenge can be justified as described in the manuscript. However, the manuscript says that the results are consistent with “springtime global distribution” and “trends from MODIS” without presenting any actual comparisons.
I’m not suggesting that the manuscript should present the comparisons with independent observations. Those comparisons would be more about the performance of the Level-2 aerosol retrieval algorithm, which is out of the scope of this study.
The main focus of this study is to involve quality flags and spatio-temporal variability for Level 3 processing. Therefore, I suppose the evaluation this study should present is a more detailed comparison with the simple averaging method, not with independent observations (e.g., MODIS). Although some of these comparisons are presented in Section 5.3, they are not sufficient to demonstrate the effectiveness of the proposed method.
I strongly recommend adding more detailed comparisons between panels (a) and (b) in Figs. 7 and 8. Maybe Sections 5.3 and 5.4 can be merged in this step.
Specifically, the following questions can be answered: What percentage of missing values was recovered compared to the simple average? How can you show quantitatively that the proposed method provides smoother output? Basically, the arguments stated in Section 5.3 are not clearly backed up by only Figs. 7 and 8.
- Line 289: By any chance, did you mean to say “levels” instead of “units” in the first sentence?
Technical corrections
- Line 68: Missing period at the end of the paragraph.
- Line 94: yields -> yield
- Lines 145, 149, 205, 211: Equation -> Eq.
- Line 146: Section 3.2.2 -> Section 3.2.1
- Line 192: an each element -> each element
- Line 282: 550nm -> 550 nm (with a space)
- Line 286: Please provide the full name of MODIS.
- Line 296: ADO -> AOD
Citation: https://doi.org/10.5194/egusphere-2024-604-RC2
AC2: 'Reply on RC2', Sangwook Kang, 17 May 2024
We are grateful to the reviewers for their insightful comments and thought-provoking questions. In response to their feedback, we have prepared a comprehensive point-by-point explanation. Furthermore, we have made significant revisions to our manuscript, incorporating their suggestions to improve its overall quality and clarity. We trust that the editors and reviewers will find the revised manuscript significantly enhanced and that it more effectively conveys the significance of our research. Please see the attached file for the point-by-point responses.
Viewed
- HTML: 335
- PDF: 108
- XML: 36
- Total: 479
- BibTeX: 25
- EndNote: 23
Sooyon Kim, Yeseul Cho, Hanjeong Ki, Seyoung Park, Dagun Oh, Seungjun Lee, Yeonghye Cho, Jhoon Kim, Wonjin Lee, Jaewoo Park, and Ick Hoon Jin