This work is distributed under the Creative Commons Attribution 4.0 License.
Vicarious Calibration of the TROPOMI-SWIR module over the Railroad Valley playa
Abstract. The SWIR module of the TROPOMI instrument on board ESA's Sentinel-5p mission has been very stable during its five years in orbit. Calibration was performed on-ground, complemented by measurements during in-flight instrument commissioning. The radiometric response and general performance of the SWIR module are monitored with onboard calibration sources. We show that after five years in orbit, TROPOMI-SWIR has continued to show excellent performance, with a degradation of at most 0.1 % in transmission and fewer than 0.3 % of the detector pixels lost. Independent validation of the instrument calibration, via vicarious calibration, can be done through comparisons with ground-based reflectance data. In this work, measurements at the Railroad Valley Playa are used to perform vicarious calibration of the TROPOMI-SWIR measurements, using both dedicated measurement campaigns and automated reflectance measurements through RADCALNET. As such, TROPOMI-SWIR is an excellent test case to explore the methodology of vicarious calibration applied to infrared spectroscopy. Using the methodology developed for the vicarious calibration of the OCO-2 and GOSAT missions, the absolute radiometric performance of TROPOMI-SWIR is independently verified to be stable to within ~6–10 % using the Railroad Valley data, for both the absolute and, consequently, the relative radiometric calibration. Differences with the onboard calibration originate from the BRDF effects of the desert surface, the large variety in viewing angles, and the different footprint sizes of the TROPOMI pixels. Nevertheless, vicarious calibration is shown to be a valuable additional tool for validating the radiance-level performance of infrared instruments such as TROPOMI-SWIR in the field of atmospheric composition.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-89', Anonymous Referee #1, 10 Mar 2023
Review of "Vicarious Calibration of the TROPOMI-SWIR module over the
Railroad Valley Playa" by Tim A. van Kempen et al., submitted to AMT, 2023
This paper describes both the results of the on-board calibration/monitoring of the TROPOMI SWIR module, and of independent vicarious ground-based calibration (or should that be verification) of the (absolute) radiometric accuracy and stability. It is shown from the on-board diagnostics that the SWIR module is extremely stable and suffers very little pixel loss. In a second theme, the ground-based vicarious calibration is described with details about the various required correction steps to make ground-based and satellite data truly comparable, although several limits are shown to be unavoidable (at this location, with this satellite sounder). In spite of the care taken to address these corrections, the vicarious method is shown to provide an independent but much less stringent assessment of the radiometric stability of the SWIR module.
The paper is very well written: clear language, logical structure, clear supporting figures (albeit with too-small labels), and the underlying work is comprehensive and to-the-point. The part on vicarious calibration using the Railroad Valley Playa is also a first for the TROPOMI mission and as such innovative. Besides a few minor comments listed below, my only gripe is with the conclusions in the abstract regarding the vicarious calibration: I think it may be made more clear that vicarious calibration following the current state-of-the-art has severe limitations for an instrument such as TROPOMI and cannot compete with onboard calibration. Or phrased differently: onboard tools to allow the latter are still a must (as you state in the conclusions section).
As said, please find below some more minor remarks/questions that arose during my 1st reading of the paper (some may be answered further on in the paper, but that wasn't clear at 1st sight). I list these as they may help the authors to fine-tune the reading experience.
=====================
Abstract:
- You speak about vicarious calibration, but is it actually calibration, i.e. are correction factors derived and applied, or is it only verification?
- The last sentence about added value in the vicarious calibration w.r.t. the onboard calibration is very vague. Probably because your end conclusion is actually rather the opposite: that there is hardly any added value (if there are on-board calibration means)?
Introduction:
- line 11: perhaps specify that this is about the SWIR detector as the numbers (in particular the resolution on the ground) are different for the UVN module.
- some unexplained acronyms (RADCALNET, JPL LED), also happens in other locations in the paper.
Section 2.1:
- line 1: from on-ground measurements -> from pre-flight on-ground measurements
- Transmission: this is meant as both transmission along the optical path and detector sensitivity?
Section 2.3:
- line 1 on p8: pixels -> pixel's?
Section 3.1:
- is this poorer spectral resolution of RADCALNET (10nm) still fine enough to really probe only the continuum you're aiming for?
Section 3.2:
- You write: "This variation is dominated by the varying viewing angles, represented by the Bidirectional Reflection Distribution Function (BRDF) of the non-Lambertian desert surface." How do you know? Is there a strong correlation with viewing angles but not solar angles or e.g., aerosol load?
Section 4.1.2:
- Aerosols are not considered in RemoteC?
Section 4.1.3:
- Please justify the use of a linear extrapolation (I guess this is what you expect for Mie scattering at these wavelengths)?
Section 4.2.1:
- I'm not sure the full mathematical description of the model is needed here, but if you provide it, it would be nice to have some description of what the different parameters physically mean (e.g. r_0, k, b), if they can be given an intuitive meaning.
Section 4.3.2:
- What causes the remaining variability, besides the SZA evolution?
Section 5.1:
- Fig. 9: I don't think I understand the units of the y-axis, the range seems huge for a multiplicative correction factor. How is this correction factor applied?
- General comment on figures throughout the paper: many labels are rather small. E.g., in Figure 12.
Section 6.2.1:
- Maybe I missed it, but would it not be an interesting exercise to apply this (probably too strict) TROPOMI CH4 cloud filter? You'll lose a lot of data, but at least cloud contamination should be minimal.
Section 6.3:
- The first sentence sounds a bit overconfident. I guess you mean "For TROPOMI-SWIR, vicarious calibration cannot be improved upon from the RRV analysis presented in this work."
- 1st paragraph: so a more detailed mapping of the BRDF over the entire RRV (allowing you to drop the homogeneity assumption) would be a great step forwards? Then again, looking at Figure 2, it seems many pixels even reach outside the valley itself.
- At some point, true horizontal sensitivity over the pixel will probably also be an issue (i.e., it not being top-hat like but rather a super-Gaussian of some sort?).
Citation: https://doi.org/10.5194/egusphere-2023-89-RC1
AC1: 'Reply on RC1', Tim van Kempen, 16 Jun 2023
Dear referee,
Thank you for the kind words. Please find below responses that complement the changes that were made in the new version of the paper. Indeed, the use of onboard calibration should be highlighted even more.
I also want to apologise for the very long delay in this reaction, and subsequently thank you for your patience. Due to the absence of a second referee, this response has been sitting on my hard drive for a long time.
With kind regards,
Tim
=====================
Abstract: - You speak about vicarious calibration, but is it actually calibration, i.e. are correction factors derived and applied, or is it only verification?
> Indeed, as applied to TROPOMI-SWIR, it is only a verification. Although correction factors could have been derived (with large uncertainties), these are never applied to the TROPOMI-SWIR data. However, to avoid introducing confusing differences with the existing literature in this field, we elected to keep the term 'vicarious calibration'.
- The last sentence about added value in the vicarious calibration w.r.t. the onboard calibration is very vague. Probably because your end conclusion is actually rather the opposite: that there is hardly any added value (if there are on-board calibration means)?
> We added an additional sentence. In our opinion there is added value. Despite the very large uncertainties, vicarious calibration remains an independent verification. In addition, it also reveals how design choices of new instruments must be made to make more effective use of vicarious calibration.
Introduction: - line 11: perhaps specify that this is about the SWIR detector as the numbers (in particular the resolution on the ground) are different for the UVN module.
> corrected
- some unexplained acronyms (RADCALNET, JPL LED), also happens in other locations in the paper.
> corrected
Section 2.1: - line 1: from on-ground measurements -> from pre-flight on-ground measurements
> corrected
- Transmission: this is meant as both transmission along the optical path and detector sensitivity?
> This was not our intent, but effectively, you are correct. This is due to the ordering of sections 2.1, 2.2 and 2.3. By changing the order (2.1 is now the last), this ambiguity has been removed. The transmission should refer only to the optical path. The detector sensitivity is in part covered by the dark flux and degradation.
Section 2.3: - line 1 on p8: pixels -> pixel's?
> corrected
Section 3.1: - is this poorer spectral resolution of RADCALNET (10nm) still fine enough to really probe only the continuum you're aiming for?
> No it is not. This is discussed in 4.4.1. Apart from the broad response of 10 nm, the shape of the response (triangular) also poses issues.
Section 3.2: - You write: "This variation is dominated by the varying viewing angles, represented by the Bidirectional Reflection Distribution Function (BRDF) of the non-Lambertian desert surface." How do you know? Is there a strong correlation with viewing angles but not solar angles or e.g., aerosol load?
> First, it should be that it is dominated by varying viewing AND solar angles. Thank you for catching this error.
This is an effective conclusion from the pair of papers from Bruegge et al., 2019a and 2019b.
Although aerosol load could play a major role, the effective aerosol optical depth (AOD) at 2.3 micron above RRV is very low. Even with large AOD values in the visible, the term remains relatively negligible at 2.3 micron.
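Why the 2.3 micron AOD term stays small even for a sizeable visible AOD can be illustrated with a minimal Ångström-law extrapolation. This is a sketch only: the 550 nm AOD and the Ångström exponent below are illustrative assumptions, not values from the paper or from RRV measurements.

```python
# Angstrom-law extrapolation of aerosol optical depth (AOD) to 2.3 micron.
# Illustrative inputs: a visible AOD of 0.10 at 550 nm and an Angstrom
# exponent of 1.5 (typical of fine-mode continental aerosol; assumed here).
def aod_at_wavelength(aod_ref, wl_ref_nm, wl_nm, angstrom_exp):
    """Extrapolate AOD from a reference wavelength via the Angstrom power law."""
    return aod_ref * (wl_nm / wl_ref_nm) ** (-angstrom_exp)

aod_550 = 0.10
aod_2300 = aod_at_wavelength(aod_550, 550.0, 2300.0, angstrom_exp=1.5)
print(f"AOD at 2.3 micron: {aod_2300:.4f}")  # roughly 0.01, an order of magnitude below the visible AOD
```

Under these assumptions the 2.3 micron AOD comes out near 0.01, supporting the statement that even a substantial visible aerosol load contributes little at the SWIR wavelength.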
Section 4.1.2: - Aerosols are not considered in RemoteC?
> Yes, they are. We changed the text to reflect this.
Section 4.1.3: - Please justify the use of a linear extrapolation (I guess this is what you expect for Mie scattering at these wavelengths)?
> changed
Section 4.2.1: - I'm not sure the full mathematical description of the model is needed here, but if you provide it, it would be nice to have some description of what the different parameters physically mean (e.g. r_0, k, b), if they can be given an intuitive meaning.
> This is explained in Bruegge et al., 2019a, but more thoroughly in the Rahman papers from 1993. Conceptually it is not easy to relate the parameters to physical phenomena.
Section 4.3.2: - What causes the remaining variability, besides the SZA evolution?
> Unknown. We hypothesized that even small variations in elevation (or even surface 'roughness') cause small-scale shadowing. But there is no evidence to support this hypothesis, so it was decided to leave this out of the paper. Similarly, we discussed whether or not a very small component of the total emission is due not to reflection, but to thermal emission of a hot surface: at 300-320 K, blackbody radiation at 2.3 micron is nonzero. But we have no surface temperature measurements to confirm/disprove this.
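The thermal-emission hypothesis can be made quantitative with Planck's law. A back-of-the-envelope sketch, using the 300-320 K surface temperatures quoted in the reply (the comparison to reflected radiance in the note below is an order-of-magnitude estimate, not a value from the paper):

```python
import math

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1 (Planck's law)."""
    h = 6.62607015e-34   # Planck constant, J s
    c = 2.99792458e8     # speed of light, m/s
    kB = 1.380649e-23    # Boltzmann constant, J/K
    x = h * c / (wavelength_m * kB * temp_k)
    return 2.0 * h * c**2 / wavelength_m**5 / (math.exp(x) - 1.0)

wl = 2.3e-6  # 2.3 micron
for T in (300.0, 320.0):
    # convert to W m^-2 sr^-1 um^-1 for readability
    print(f"T = {T:.0f} K: {planck_radiance(wl, T) * 1e-6:.4f} W m^-2 sr^-1 um^-1")
```

Both values come out at the 10^-3 to 10^-2 W m^-2 sr^-1 um^-1 level: nonzero, as the reply notes, but far below the reflected solar continuum radiance one expects over a bright playa, consistent with the effect being very small if present at all.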
Section 5.1: - Fig. 9: I don't think I understand the units of the y-axis, the range seems huge for a multiplicative correction factor. How is this correction factor applied?
> This is indeed a multiplicative factor that is applied to the raw spectrum. This is never shown, as the resulting spectra are the blue spectra in Fig. 11. Some extreme values were not applied, although some extreme values indeed show a difference of almost an order of magnitude in the resulting comparison similar to Fig. 11 (i.e. a simulated spectrum would be 7 times brighter than the observed).
- General comment on figures throughout the paper: many labels are rather small. E.g., in Figure 12.
> We tried changing the fonts and font sizes, but were relatively unhappy with the results.
Section 6.2.1: - Maybe I missed it, but would it not be an interesting exercise to apply this (probably too strict) TROPOMI CH4 cloud filter? You'll lose a lot of data, but at least cloud contamination should be minimal.
> Yes, this was done, with relatively poor results. Due to the generally high albedo, the CH4 fits above whiter deserts such as RRV (and similarly salt flats) do not always converge, even in known cloudless conditions. You indeed get very few reliable points (~10 per year).
Section 6.3: - The first sentence sounds a bit overconfident. I guess you mean "For TROPOMI-SWIR, vicarious calibration cannot be improved upon from the RRV analysis presented in this work."
> changed
- 1st paragraph: so a more detailed mapping of the BRDF over the entire RRV (allowing you to drop the homogeneity assumption) would be a great step forwards? Then again, looking at Figure 2, it seems many pixels even reach outside the valley itself.
> Indeed, after submission we looked at this using interpolated/modelled VIIRS BRDF data, but did not get an improvement.
- At some point, true horizontal sensitivity over the pixel will probably also be an issue (i.e., it not being top-hat like but rather a super-Gaussian of some sort?).
> Very likely the spatial response function (i.e. the horizontal sensitivity) already plays a role, but we are unable to assess how big it is. We theorized that the systematic increase of 2-3 % seen in the median of the mRPV model comparisons (Table 4) might be due to this effect. However, with the larger uncertainties of all the other effects, we cannot, and likely never will be able to, discern between statistical effects and the spatial response of a pixel.
Citation: https://doi.org/10.5194/egusphere-2023-89-AC1
RC2: 'Comment on egusphere-2023-89', Anonymous Referee #2, 05 Jun 2023
Review of "Vicarious Calibration of the TROPOMI-SWIR module over the Railroad Valley Playa" by Tim A. van Kempen et al., submitted to AMT, 2023
The manuscript with the title “Vicarious Calibration of the TROPOMI-SWIR module over the Railroad Valley Playa” presents innovative work and is worth publishing in AMT because of its novelty in applying the vicarious calibration technique for the first time to one TROPOMI band. Vicarious calibration refers to the technique of making use of natural sites on the Earth’s surface, like the Railroad Valley Playa, for post-launch calibration of satellite sensors. The findings of the authors are important primarily because they verify the instrument’s stability from the beginning of nominal operation after the commissioning phase until the end of 2021. TROPOMI proves to be an instrument with a high quality of measured radiances in the SWIR spectral band based on the analysis that has been performed in this manuscript. Moreover, it could be a starting point for other researchers to try this approach for the UVN spectral bands of TROPOMI too. It is indeed proven to be an additional valuable tool in validating radiance-level performance for TROPOMI channels, as stated in the abstract.
I found the manuscript well-written and easy for the reader to follow in methodology and results. The English language is used in a clear way. I had minor trouble understanding what the authors want to say in a few points. For those points where I had problems following the text, I would ask the authors to rephrase the sentences and explain in more detail what has been done. In some parts of the manuscript, I found that the sentences are too direct and slightly inappropriate for a scientific paper. Sentences like “See Kleipool et al. (2018), van Kempen et al. (2019) and Ludewig et al. (2020).” (in line 21 of page 4) could be fine for a technical document or a validation report but certainly not so good for a peer-reviewed journal paper. The reader expects that the authors refer to specific references when presenting specific arguments, not that they only suggest literature to the reader. Minor typos and small grammar mistakes should be corrected before publishing this manuscript. Captions of some Figures and Tables might also be revised because they are too short and sometimes not informative enough. For instance, in Table 3, the letter “F”, which is often mentioned next to the MODIS and VIIRS ratios, should be explained in the caption of the Table. The readers might need to spend some time to find this information in the text.
My specific comments for improving the content and the overall quality of the manuscript are listed below separately for each section.
Abstract
- Page 1, Line 7: I would suggest to give a short description of the location Railroad Valley Playa in the abstract. Currently, there is only reference to the region but why this is special and appropriate region for vicarious calibration of TROPOMI-SWIR module is missing. I think this is something that the reader wants to know already when reading the abstract as it is mentioned in the title of this manuscript.
- Page 1, Line 10: I would suggest to provide references for the vicarious calibration of OCO-2 and GOSAT missions in the abstract.
- Page 1, Line 15: infra-red -> infrared without “-“.
Section 1 (Introduction)
- Page 2, Line 12: What happened with the 25 columns and 39 rows? What do the authors mean with the statement “about 975 columns and 217 rows are illuminated”?
Section 2 (TROPOMI-SWIR Performance)
- Page 4, Line 1: I would recommend to the authors to clarify what do they mean by “line-free”. Probably, they mean that there is no absorption from the atmospheric gases in this wavelength range, but it is not very clear.
- Page 4, Line 4: Please add “at nadir” after “7 x 5.5 km2”.
- Page 4, Line 5: The change in the integration time should have been effective since August 2019 and not 2020. Please cross-check this important date. Moreover, I would propose to rephrase the sentence as “effectively changing the spatial resolution in the along-track direction from …”.
- Page 4, Line 13: The authors refer to the instrument zenith angle (i.e., distance from the nadir pixel) and just one line above they refer to the viewing zenith angles. What is the difference that the authors imply between the two zenith angles? I would recommend them to give the definitions.
- Page 4, Line 21: As mentioned above, I would consider such way of writing a sentence like “See Kleipool et. (2018) …” not so formal for a scientific paper. Similar type of sentences present in few more points of the manuscript should be removed or restructured.
Section 2.1 (Transmission stability)
- Page 5, Line 5: I would recommend to delete the sentence “It is stable over the first year”.
Section 2.2. (Dark Flux)
- Page 7, Line 8: I suggest to replace “At the change in along-track spatial resolution” with the exact date.
Section 3 (Reference Data)
- Page 9, Line 10: I would suggest to replace “see 3.1” with “as described in Section 3.1”. I find the direct statement “see …” informal for the paper. I would recommend the same for “see 3.2” of line 11 and “see 3.3” of line 12.
- Page 9, Line 10: I suggest to write “… consists of ground measurements from dedicated campaigns” instead of “dedicated campaign ground measurements”.
Section 3.1 (RADCALNET)
- Page 9, Line 19: I would rephrase “…instrument suite which can provide data at several times during the TROPOMI overpasses”.
- Page 9, Line 22-23: The sentence “For more on the RADCALNET dataset, … on the website” seems informal to me. I would try to rephrase it.
Section 3.2 (RRV campaigns)
- Page 9, Line 27: ASD is an acronym which I cannot find in the text.
- Page 9, Line 28: I would change to “an area of 500 x 500 m2” instead of “500 meters by 500 meters”. In general, the sentence “The field measurements of …. TROPOMI at nadir” is a bit too long and complicated for me to follow it well.
- Page 10, Line 1: JPL is an acronym which is not introduced in the text.
- Page 10, Line 7: The word “from” is missing -> “varying from day to day”.
Section 3.3 (Ancillary data)
- Page 11, Line 4: Where is mRPV defined in the text earlier? I only see a reference in Line 8 but the acronym needs to be defined in the point that it is used for the first time in the text.
Section 3.3.1 (MODIS)
- Page 11, Line 17: I would recommend to rephrase the sentence “As such it is … in the next section”.
Section 3.3.3 (MISR)
- Page 12, Line 3: I would recommend to rephrase the sentence “Similar to the MODIS … to be irregular”.
Section 4 (Correction methodology)
- Page 12, Line 12: I would add “in Section 4.1”.
Section 4.1.2 (RemoteC)
- Page 13, Line 2: I would rephrase the first sentence.
Section 4.1.3 (Aerosol Optical Depth)
- Page 13, Line 11: I would rephrase the first sentence and replace “changes from day to day” with “shows a daily variation”.
- Page 13, Line 14: AERONET with capital letters.
Section 4.2 (BRDF normalization)
- Page 13, Line 17: There is a typo. Replace “byt” with “by”.
Section 4.2.1 (mRPV)
- Page 14, Figure 6: I cannot really see any pattern or a seasonal variation for the free parameters in the mRPV model. This figure is not so informative from my point of view.
- Page 14, Lines 4-14: I would probably move all the mathematical formulas in an appendix. As far as I understand, the formulas are not originally derived for this study. Therefore, the reference to the source and the description in an appendix would be more optimal for me. Moreover, there are no explanations about the free parameters r0, k and b. What do they represent? What does “hot spot” actually mean?
Section 4.3.1 (Spatial Averaging)
- Page 15, Line 24: “Data” is plural -> Change from “Most data is available” to “Most data are available”.
- Page 16, Line 6: “We realize this introduces …” -> “We realize that this introduces …”
- Page 16, Line 7: By how much is the error? An estimate should be given.
Section 4.3.2 (Time differences and Solar angle)
- Page 16, Line 9: “Variations also exist as a function of the time of day.” What do the authors mean with this statement? It is not clear to me.
- Page 16, Line 20: Please replace the “2nd-degree polynomial fit” with “2nd-order polynomial fit”.
Section 4.4.1 (RADCALNET)
- Page 17, Line 19: I didn’t understand how this multiplication factor of 1.247 was derived.
Section 4.4.2 (Ground campaigns)
- This section seems incomplete. Could the authors elaborate more on how the radiances are derived from the reflectances?
- Page 18, Table 2: The third column refers to IZA and in the caption TROPOMI-SWIR viewing angle is mentioned. I would recommend to keep consistency for the angles IZA and VZA. This comment is relevant to one of my former comments in a previous section.
- Page 19, Figure 9: Similarly to the previous comment, do the authors prefer to show IZA in the label of the x-axis? There is also a small inconsistency in the marker; “red triangle” should be replaced by “red diamond”.
- Page 19, Lines 5-8: I would like to ask the authors to comment further on the uncertainties which are introduced due to the assumption of spectral dependence absence.
Section 5 (Results)
- Page 20, Line 13: Elaborate more on the statement “the accuracy of the mRPV model”. How should the reader interpret this? mRPV model is not accurate enough at larger VZA?
Section 5.2 (TROPOMI-SWIR vs RadCalnet)
- In the text, RADCALNET is often written with capital letters. Here the acronym is written as RadCalnet. I would recommend to keep consistency of the acronyms.
- Page 22, Line 2: “Structural trends in the residuals are seen for the Lambertian model, but not the mRPV model.” I cannot see these trends. I would ask the authors to elaborate more on it.
Section 5.3 (TROPOMI-SWIR vs dedicated campaigns)
- Page 23, Line 5: Where is MIPREP acronym defined?
- Page 23, Line 22: I would replace “There does appear to” with “It appears to”.
- Page 24, Figure 12: There is a small mistake here. Please replace “VIIRS (left)” with “VIIRS (right)” in the caption. I would also like to see a statement related to the second sentence in the caption. It’s OK to omit ratios above 1.5 but it would be informative to specify how often those ratios were found. I would recommend to give a percentage for MODIS and VIIRS separately.
- Page 25, Table 3: Please include in the caption what “F” stands for. The readers would appreciate this information written here.
- Page 26, Table 4: Do the numbers refer to the mean and median ratios of Table 3 using all orbits? The authors should include this information in the caption.
Section 6.2.1 (Cloud Filtering)
- Page 27, Line 8-9: “Due to the albedo of cloud, TROPOMI-SWIR radiance may be different than estimated from the RADCALNET.” Please elaborate more on this and give an estimate of this effect.
Section 6.3 (Implications for TROPOMI)
- Page 27, Line 29: “As such, data on nadir-views remains inconclusive”. Probably more orbits with nadir-views should be investigated. A set of 5 orbits is a very limited sampling to draw any conclusions. Moreover, the word “data” is plural and there is a grammar issue; please correct “remains” to “remain”.
Section 7 (Conclusions)
- Page 28, Line 24: “Vicarious calibration limits are an order of magnitude larger (4-10%).” It is not clear to me what do the authors mean with this statement and to which findings/comparisons they refer to?
Citation: https://doi.org/10.5194/egusphere-2023-89-RC2
AC2: 'Reply on RC2', Tim van Kempen, 16 Jun 2023
Dear Referee,
Thank you for the kind words and the suggestions. We have adopted them in the new version of the paper that has been submitted. Specific comments on specific suggestions are given below. If no comment is given, we have adopted your comments in full.
with kind regards,
Tim van Kempen
-----
Abstract
Page 1, Line 7: I would suggest to give a short description of the location Railroad Valley Playa in the abstract. Currently, there is only reference to the region but why this is special and appropriate region for vicarious calibration of TROPOMI-SWIR module is missing. I think this is something that the reader wants to know already when reading the abstract as it is mentioned in the title of this manuscript.
> included in the abstract.
Page 1, Line 10: I would suggest to provide references for the vicarious calibration of OCO-2 and GOSAT missions in the abstract.
> The guidelines of the AMT journal dictate that references should not appear in the abstract. They are given in the introduction (Kuze et al., 2014; Bruegge et al., 2019a, 2019b and 2021).
Section 1 (Introduction)
Page 2, Line 12: What happened with the 25 columns and 39 rows? What do the authors mean with the statement “about 975 columns and 217 rows are illuminated”?
> The columns are not all illuminated due to a cover that is on the detector. The rows are not all illuminated due to the exit angle of the dispersing element.
Section 2 (TROPOMI-SWIR Performance)
Page 4, Line 5: The change in the integration time should have been effective since August 2019 and not 2020. Please cross-check this important date. Moreover, I would propose to rephrase the sentence as “effectively changing the spatial resolution in the along-track direction from …”.
> Thank you, this was indeed a typo.
Page 4, Line 13: The authors refer to the instrument zenith angle (i.e., distance from the nadir pixel) and just one line above they refer to the viewing zenith angles. What is the difference that the authors imply between the two zenith angles? I would recommend them to give the definitions.
> These are the same; clarified.
Section 2.1 (Transmission stability), Section 2.2 (Dark Flux)
> We would like to inform the referee that the order of sections 2.1, 2.2 and 2.3 has changed for clarity.
Section 3.2 (RRV campaigns)
Page 9, Line 28: I would change to “an area of 500 x 500 m2” instead of “500 meters by 500 meters”. In general, the sentence “The field measurements of …. TROPOMI at nadir” is a bit too long and complicated for me to follow it well.
> rephrased
Page 10, Line 1: JPL is an acronym which is not introduced in the text.
> in the current version it is used in the introduction.
Section 4.2.1 (mRPV)
Page 14, Figure 6: I cannot really see any pattern or a seasonal variation for the free parameters n the mRPV model. This figure is not so informative from my point-of-view.
> The main pattern is the correlation between the three parameters and the visualisation that the r_0, k and b parameters are not static for the desert surface.
Page 14, Lines 4-14: I would probably move all the mathematical formulas to an appendix. As far as I understand, the formulas are not originally derived for this study. Therefore, the reference to the source and the description in an appendix would be more optimal for me. Moreover, there are no explanations about the free parameters r0, k and b. What do they represent? What does “hot spot” actually mean?
> This was discussed internally before submission, but we decided against it. The main motivation is that the comparison between the mRPV model and VIIRS/MODIS, including the different corrections, is a central point of the paper. It is our opinion that the lessons learned from the methodology as applied to the data are a more important conclusion than the final percentages derived from it. As such, putting the mathematical formulae within the paper seemed prudent. During writing we kept going back to these often to understand how they relate to the corrections.
The relation of the free parameters themselves, as well as the hot spot, as applied to RRV are extensively discussed in Bruegge et al., 2019a, 2019b, in addition to the original papers from Rahman et al., 1993a, 1993b. These papers are also referenced. As opposed to the formulae, we feel that the references should be sufficient.
Section 4.3.1 (Spatial Averaging)
Page 16, Line 7: By how much is the error? An estimate should be given.
> We do not know, but we discuss this effect in Section 6. In fact, most of the uncertainty in the final result of this paper is likely related to the large pixel size and the spatial averaging steps. We rephrased the sentence to remove the ambiguity and refer to the uncertainties seen in earlier studies.
Section 4.3.2 (Time differences and Solar angle)
Page 16, Line 9: “Variations also exist as a function of the time of day.” What do the authors mean with this statement? It is not clear to me.
> The sentence was rephrased to remove the ambiguity.
Section 4.4.1 (RADCALNET)
Page 17, Line 19: I didn’t understand how this multiplication factor of 1.247 was derived.
> It is the average difference between all dates that have a cloudless ToA RADCALNET radiance at 2310 nm and the 2313 nm continuum radiance seen by TROPOMI where the viewing angle is less than 3 degrees (i.e. nearly nadir-viewing). For such small angles, no BRDF correction is assumed to be needed. The difference is solely due to the absorption of methane and water within the RADCALNET bandwidth (~10 nm wide, triangular).
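The mechanism behind this factor, a broad triangular band average sitting lower than the line-free continuum, can be mimicked with a toy convolution. The line positions and depths below are hypothetical stand-ins for the actual CH4/H2O absorption around 2310 nm, so the resulting factor only illustrates the mechanism, not the value 1.247.

```python
import numpy as np

# Toy spectrum on a 0.1 nm grid around 2310 nm: unit continuum with a few
# hypothetical Gaussian absorption dips standing in for CH4/H2O lines.
wl = np.arange(2300.0, 2320.0, 0.1)
spectrum = np.ones_like(wl)
for center, depth, width in [(2305.0, 0.6, 0.5), (2311.5, 0.8, 0.4), (2316.0, 0.5, 0.6)]:
    spectrum -= depth * np.exp(-0.5 * ((wl - center) / width) ** 2)

# 10 nm FWHM triangular response centred on 2310 nm, normalised to unit sum.
response = np.clip(1.0 - np.abs(wl - 2310.0) / 10.0, 0.0, 1.0)
response /= response.sum()

band_avg = float(np.sum(response * spectrum))  # what a RADCALNET-like band sees
continuum = 1.0                                # what a line-free continuum sample sees
print(f"band average: {band_avg:.3f}, correction factor: {continuum / band_avg:.3f}")
```

The band average is pulled below the continuum by the absorption lines inside the response, so the multiplicative factor needed to match the continuum is greater than one, which is the same effect the reply describes for the 1.247 value.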
Section 4.4.2 (Ground campaigns)
This section seems incomplete. Could the authors elaborate more on how the radiances are derived from the reflectances?
> This is described in Section 4.1.2. RemoteC is used to calculate the radiative transfer of the irradiance through reflection on the RRV surface back to the TROPOMI instrument.
Page 19, Lines 5-8: I would like to ask the authors to comment further on the uncertainties which are introduced due to the assumption of spectral dependence absence.
> added. This uncertainty is assumed to be very minor.
Section 5 (Results)
Page 20, Line 13: Elaborate more on the statement “the accuracy of the mRPV model”. How should the reader interpret this? mRPV model is not accurate enough at larger VZA?
> Yes. The model was built upon many measurements of the Earth in the 1990s, none of which had angles larger than 40-45 degrees. We do note that the model significantly outperforms both the MODIS and VIIRS data products at these larger angles.
Section 5.2 (TROPOMI-SWIR vs RadCalnet)
Page 22, Line 2: “Structural trends in the residuals are seen for the Lambertian model, but not the mRPV model.” I cannot see these trends. I would ask the authors to elaborate more on it.> These trends are seen in the residuals of Fig. 10. There, the Lambertian model (top) shows slopes in time (e.g. the second half of 2020 and the first half of 2021), while the mRPV residuals are random (i.e. a fitted slope would be 0).
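The "fitted slope would be 0" argument can be made quantitative by fitting a line to each residual time series; a sketch with synthetic residuals standing in for the Fig. 10 data:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 80)  # time in years (synthetic sampling)

# Synthetic residuals: the Lambertian-like series carries a drift,
# the mRPV-like series is pure noise around zero.
resid_lambertian = 0.02 * t + rng.normal(0.0, 0.01, t.size)
resid_mrpv       = rng.normal(0.0, 0.01, t.size)

# np.polyfit with degree 1 returns (slope, intercept).
slope_lam, _  = np.polyfit(t, resid_lambertian, 1)
slope_mrpv, _ = np.polyfit(t, resid_mrpv, 1)
# A slope consistent with zero indicates no structural trend.
```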
Section 5.3 (TROPOMI-SWIR vs dedicated campaigns)
Page 23, Line 5: Where is MIPREP acronym defined?
> rephrased/removed the MIPREP reference.
Page 24, Figure 12: There is a small mistake here. Please replace “VIIRS (left)” with “VIIRS (right)” in the caption. I would also like to see a statement related to the second sentence in the caption. It’s OK to omit ratios above 1.5 but it would be informative to specify how often those ratios were found. I would recommend to give a percentage for MODIS and VIIRS separately.
> Corrected. We do not give a percentage, but an absolute number, mirroring the text.
Section 6.3 (Implications for TROPOMI)
Page 27, Line 29: “As such, data on nadir-views remains inconclusive”. Probably more orbits with nadir-views should be investigated. A set of 5 orbits is a very limited sampling to draw any conclusions. Moreover, the word “data” is plural and there is a grammar issue; please correct “remains” to “remain”.
> I fully agree. The mostly manual data acquisition is optimized for OCO and GOSAT, which have much poorer coverage.
Section 7 (Conclusions)
Page 28, Line 24: “Vicarious calibration limits are an order of magnitude larger (4-10%).” It is not clear to me what the authors mean with this statement and to which findings/comparisons they refer.
> It refers to the on-board results in the previous bullet. Rephrased for clarity.
Citation: https://doi.org/10.5194/egusphere-2023-89-AC2
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2023-89', Anonymous Referee #1, 10 Mar 2023
Review of "Vicarious Calibration of the TROPOMI-SWIR module over the
Railroad Valley Playa" by Tim A. van Kempen et al., submitted to AMT, 2023
This paper describes both the results of the on-board calibration/monitoring of the TROPOMI SWIR module, and of independent vicarious ground-based calibration (or should that be verification) of the (absolute) radiometric accuracy and stability. It is shown from the on-board diagnostics that the SWIR module is extremely stable and suffers very little pixel loss. In a second theme, the ground-based vicarious calibration is described with details about the various required correction steps to make ground-based and satellite data truly comparable, although several limits are shown to be unavoidable (at this location, with this satellite sounder). In spite of the care taken to address these corrections, the vicarious method is shown to provide an independent but much less stringent assessment of the radiometric stability of the SWIR module. The paper is very well written: clear language, logical structure, clear supporting figures (with too small labels) and the underlying work is comprehensive and to-the-point. The part on vicarious calibration using the Railroad Valley Playa is also a first for the TROPOMI mission and as such innovative. Besides a few minor comments listed below, my only gripe is with the conclusions in the abstract regarding the vicarious calibration: I think it may be made more clear that the vicarious calibration following the current state-of-the-art has severe limitations for an instrument such as TROPOMI and cannot compete with onboard calibration. Or phrased differently: onboard tools to allow the latter are still a must (as you state in the conclusions section).
As said, please find below some more minor remarks/questions that arose during my 1st reading of the paper (some may be answered further on in the paper, but that wasn't clear at 1st sight). I list these as they may help the authors to fine-tune the reading experience.
=====================
Abstract:
- You speak about vicarious calibration, but is it actually calibration, i.e. are correction factors derived and applied, or is it only verification?
- The last sentence about added value in the vicarious calibration w.r.t. the onboard calibration is very vague. Probably because your end conclusion is actually rather the opposite: that there is hardly any added value (if there are on-board calibration means)?
Introduction:
- line 11: perhaps specify that this is about the SWIR detector as the numbers (in particular the resolution on the ground) are different for the UVN module.
- some unexplained acronyms (RADCALNET, JPL LED), also happens in other locations in the paper.
Section 2.1:
- line 1: from on-ground measurements -> from pre-flight on-ground measurements
- Transmission: this is meant as both transmission along the optical path and detector sensitivity?
Section 2.3:
- line 1 on p8: pixels -> pixel's?
Section 3.1:
- is this poorer spectral resolution of RADCALNET (10nm) still fine enough to really probe only the continuum you're aiming for?
Section 3.2:
- You write: "This variation is dominated by the varying viewing angles, represented by the Bidirectional Reflection Distribution Function (BRDF) of the non-Lambertian desert surface." How do you know? Is there a strong correlation with viewing angles but not solar angles or e.g., aerosol load?
Section 4.1.2:
- Aerosols are not considered in RemoteC?
Section 4.1.3:
- Please justify the use of a linear extrapolation (I guess this is what you expect for Mie scattering at these wavelengths)?
Section 4.2.1:
- I'm not sure the full mathematical description of the model is needed here, but if you provide it, it would be nice to have some description of what the different parameters physically mean (e.g. r_0,k,b), if they can be given an intuitive meaning.
Section 4.3.2:
- What causes the remaining variability, besides the SZA evolution?
Section 5.1:
- Fig. 9: I don't think I understand the units of the y-axis, the range seems huge for a multiplicative correction factor. How is this correction factor applied?
- General comment on figures throughout the paper: many labels are rather small. E.g., in Figure 12.
Section 6.2.1:
- Maybe I missed it, but would it not be an interesting exercise to apply this (probably too strict) TROPOMI CH4 cloud filter? You'll lose a lot of data, but at least cloud contamination should be minimal.
Section 6.3:
- The first sentence sounds a bit overconfident. I guess you mean "For TROPOMI-SWIR, vicarious calibration cannot be improved upon from the RRV analysis presented in this work."
- 1st paragraph: so a more detailed mapping of the BRDF over the entire RRV (allowing you to drop the homogeneity assumption) would be a great step forwards? Then again, looking at Figure 2, it seems many pixels even reach outside the valley itself.
- At some point, true horizontal sensitivity over the pixel will probably also be an issue (i.e., it not being top-hat like but rather a super-Gaussian of some sort?).
Citation: https://doi.org/10.5194/egusphere-2023-89-RC1
-
AC1: 'Reply on RC1', Tim van Kempen, 16 Jun 2023
>>
Dear referee,
Thank you for the kind words. Please find below responses that complement the changes made in the new version of the paper. Indeed, the usage of onboard calibration should be highlighted even more.
I also want to apologise for the very long delay in this reaction, and subsequently thank you for your patience. Due to the absence of a second referee, this response has been sitting on my hard drive for a long time.
With kind regards,
Tim
=====================
Abstract: - You speak about vicarious calibration, but is it actually calibration, i.e. are correction factors derived and applied, or is it only verification?> Indeed, as applied to TROPOMI-SWIR, it is only a verification. Although correction factors could have been derived (with large uncertainties), these are never applied to the TROPOMI-SWIR data. However, to avoid introducing confusing differences with the existing literature in this field, we elected to keep the term 'Vicarious Calibration'.
- The last sentence about added value in the vicarious calibration w.r.t. the onboard calibration is very vague. Probably because your end conclusion is actually rather the opposite: that there is hardly any added value (if there are on-board calibration means)?
> We added an additional sentence. In our opinion there is added value. Despite the very large uncertainties, vicarious calibration remains an independent verification. In addition, it also reveals how design choices of new instruments must be made to make more effective use of vicarious calibration.
Introduction: - line 11: perhaps specify that this is about the SWIR detector as the numbers (in particular the resolution on the ground) are different for the UVN module.
> corrected
- some unexplained acronyms (RADCALNET, JPL LED), also happens in other locations in the paper.> corrected
Section 2.1: - line 1: from on-ground measurements -> from pre-flight on-ground measurements
> corrected
- Transmission: this is meant as both transmission along the optical path and detector sensitivity?> This was not our intent, but effectively, you are correct. This is due to the ordering of sections 2.1, 2.2 and 2.3. By changing the order (2.1 is now the last), this ambiguity has been removed. The transmission refers only to the optical path. The detector sensitivity is in part covered by the dark flux and degradation.
Section 2.3: - line 1 on p8: pixels -> pixel's?> corrected
Section 3.1: - is this poorer spectral resolution of RADCALNET (10nm) still fine enough to really probe only the continuum you're aiming for?
> No it is not. This is discussed in 4.4.1. Apart from the broad response of 10 nm, the shape of the response (triangular) also poses issues.
Section 3.2: - You write: "This variation is dominated by the varying viewing angles, represented by the Bidirectional Reflection Distribution Function (BRDF) of the non-Lambertian desert surface." How do you know? Is there a strong correlation with viewing angles but not solar angles or e.g., aerosol load?
> First, it should be that it is dominated by varying viewing AND solar angles. Thank you for catching this error.
This is an effective conclusion from the pair of papers from Bruegge et al., 2019a and 2019b.
Although aerosol load could play a major role, the effective aerosol optical depth (AOD) at 2.3 micron above RRV is very low. Even with large AOD values in the visible, the term remains relatively negligible at 2.3 micron.
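Why a large visible AOD becomes negligible at 2.3 µm can be illustrated with the standard Ångström power law (this is the commonly used spectral extrapolation, not necessarily the one applied in the paper, which the referee notes is linear; all numbers below are illustrative):

```python
def aod_angstrom(aod_ref, wl_ref_um, wl_um, alpha):
    """Extrapolate aerosol optical depth with the Angstrom power law:
    tau(wl) = tau(ref) * (wl / ref) ** (-alpha)."""
    return aod_ref * (wl_um / wl_ref_um) ** (-alpha)

# Even a fairly large visible AOD of 0.3 at 550 nm shrinks strongly
# toward 2.3 micron for a typical continental Angstrom exponent (~1.5).
aod_23 = aod_angstrom(aod_ref=0.3, wl_ref_um=0.55, wl_um=2.3, alpha=1.5)
# aod_23 comes out well below 0.05, i.e. nearly negligible.
```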
Section 4.1.2: - Aerosols are not considered in RemoteC?> Yes, they are. Changed the text to reflect this.
Section 4.1.3: - Please justify the use of a linear extrapolation (I guess this is what you expect for Mie scattering at these wavelengths)?
> changed
Section 4.2.1: - I'm not sure the full mathematical description of the model is needed here, but if you provide it, it would be nice to have some description of what the different parameters physically mean (e.g. r_0,k,b), if they can be given an intuitive meaning.> This is explained in Bruegge et al., 2019a, and more thoroughly in the Rahman papers from 1993. Conceptually it is not easy to relate the parameters to physical phenomena.
Section 4.3.2: - What causes the remaining variability, besides the SZA evolution?> Unknown. We hypothesized that even small variations in elevation (or even surface 'roughness') cause small-scale shadowing. But there is no evidence to support this hypothesis, so it was decided to leave this out of the paper. Similarly, we discussed whether a very small component of the total emission is not due to reflection, but due to thermal emission of a hot surface: at 300-320 K, blackbody radiation at 2.3 micron is nonzero. But we have no surface temperature measurements to confirm or disprove this.
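The thermal-emission hypothesis mentioned in this reply can be checked against the Planck function: at 300-320 K the 2.3 µm spectral radiance is indeed nonzero and grows steeply with temperature (a self-contained sketch, not from the paper):

```python
import math

def planck_radiance(wl_m, temp_k):
    """Planck spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    h  = 6.62607015e-34   # Planck constant [J s]
    c  = 2.99792458e8     # speed of light [m/s]
    kb = 1.380649e-23     # Boltzmann constant [J/K]
    return (2.0 * h * c**2 / wl_m**5
            / (math.exp(h * c / (wl_m * kb * temp_k)) - 1.0))

# Surface emission at 2.3 micron for the temperature range discussed:
b_300 = planck_radiance(2.3e-6, 300.0)
b_320 = planck_radiance(2.3e-6, 320.0)
# On the Wien side of the spectrum the emission rises sharply with T,
# so a 20 K warmer surface emits several times more at this wavelength.
```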
Section 5.1: - Fig. 9: I don't think I understand the units of the y-axis, the range seems huge for a multiplicative correction factor. How is this correction factor applied?
> This is indeed a multiplicative factor that is applied to the raw spectrum. The corrected spectra are not shown separately, as they are the blue spectra in Fig. 11. Some extreme values were not applied; such extreme values indeed show a difference of almost an order of magnitude in a comparison similar to Fig. 11 (i.e. a simulated spectrum would be 7 times brighter than the observed one).
- General comment on figures throughout the paper: many labels are rather small. E.g., in Figure 12.> We tried changing the fonts and font sizes, but were relatively unhappy with the results.
Section 6.2.1: - Maybe I missed it, but would it not be an interesting exercise to apply this (probably too strict) TROPOMI CH4 cloud filter? You'll lose a lot of data, but at least cloud contamination should be minimal.
> Yes, this was done, with relatively poor results. Due to the generally high albedo, the fits of CH4 above whiter deserts such as RRV (and similarly salt flats) do not always converge, even in known cloudless conditions. You indeed get very few reliable points (~10 per year).
Section 6.3: - The first sentence sounds a bit overconfident. I guess you mean "For TROPOMI-SWIR, vicarious calibration cannot be improved upon from the RRV analysis presented in this work."> changed
- 1st paragraph: so a more detailed mapping of the BRDF over the entire RRV (allowing you to drop the homogeneity assumption) would be a great step forwards? Then again, looking at Figure 2, it seems many pixels even reach outside the valley itself.
> Indeed, after submission we looked at this using interpolated/modelled VIIRS BRDF data, but did not get an improvement.
- At some point, true horizontal sensitivity over the pixel will probably also be an issue (i.e., it not being top-hat like but rather a super-Gaussian of some sort?).
> Very likely the spatial response function (i.e. the horizontal sensitivity) already plays a role, but we are unable to assess how big. We theorized that the systematic increase of 2-3 % seen in the median of the mRPV model comparisons (Table 4) might be due to this effect. However, with the larger uncertainties of all the other effects, we cannot, and likely never will be able to, discern between statistical effects and the spatial response of a pixel.
Citation: https://doi.org/10.5194/egusphere-2023-89-AC1
-
RC2: 'Comment on egusphere-2023-89', Anonymous Referee #2, 05 Jun 2023
Review of "Vicarious Calibration of the TROPOMI-SWIR module over the Railroad Valley Playa" by Tim A. van Kempen et al., submitted to AMT, 2023
The manuscript with the title “Vicarious Calibration of the TROPOMI-SWIR module over the Railroad Valley Playa” presents innovative work and is worth publishing in AMT because of its novelty in applying the vicarious calibration technique for the first time to a TROPOMI band. Vicarious calibration refers to the technique of using natural sites on the Earth’s surface, like the Railroad Valley Playa, for post-launch calibration of satellite sensors. The findings of the authors are important primarily because they verify the instrument’s stability from the beginning of nominal operation after the commissioning phase until the end of 2021. TROPOMI proves to be an instrument with high-quality measured radiances in the SWIR spectral band, based on the analysis performed in this manuscript. Moreover, it could be a starting point for other researchers to try this approach for the UVN spectral bands of TROPOMI too. It is indeed proved to be an additional valuable tool in validating radiance-level performance for TROPOMI channels, as stated in the abstract.
I found the manuscript well-written and easy for the reader to follow in methodology and results. The English language is used in a clear way. I had minor trouble understanding what the authors want to say in a few points. For those points where I had problems following the text, I would ask the authors to rephrase the sentences and explain in more detail what has been done. In some parts of the manuscript, I found that the sentences are too direct and slightly inappropriate for a scientific paper. Sentences like “See Kleipool et al. (2018), van Kempen et al. (2019) and Ludewig et al. (2020).” (in line 21 of page 4) could be fine for a technical document or a validation report but certainly not so good for a peer-reviewed journal paper. The reader expects the authors to refer to specific references when presenting specific arguments, not only to suggest literature to the reader. Minor typos and small grammar mistakes should be corrected before publishing this manuscript. Captions of some Figures and Tables should also be revised because they are too short and sometimes not informative enough. For instance, in Table 3, the letter “F”, which is often mentioned next to the MODIS and VIIRS ratios, should be explained in the caption of the Table. The readers might need to spend some time finding this information in the text.
My specific comments for improving the content and the overall quality of the manuscript are listed below separately for each section.
Abstract
- Page 1, Line 7: I would suggest to give a short description of the location Railroad Valley Playa in the abstract. Currently, there is only reference to the region but why this is special and appropriate region for vicarious calibration of TROPOMI-SWIR module is missing. I think this is something that the reader wants to know already when reading the abstract as it is mentioned in the title of this manuscript.
- Page 1, Line 10: I would suggest to provide references for the vicarious calibration of OCO-2 and GOSAT missions in the abstract.
- Page 1, Line 15: infra-red -> infrared without “-“.
Section 1 (Introduction)
- Page 2, Line 12: What happened with the 25 columns and 39 rows? What do the authors mean with the statement “about 975 columns and 217 rows are illuminated”?
Section 2 (TROPOMI-SWIR Performance)
- Page 4, Line 1: I would recommend to the authors to clarify what do they mean by “line-free”. Probably, they mean that there is no absorption from the atmospheric gases in this wavelength range, but it is not very clear.
- Page 4, Line 4: Please add “at nadir” after “7 x 5.5 km2”.
- Page 4, Line 5: The change in the integration time should have been effective since August 2019 and not 2020. Please cross-check this important date. Moreover, I would propose to rephrase the sentence as “effectively changing the spatial resolution in the along-track direction from …”.
- Page 4, Line 13: The authors refer to the instrument zenith angle (i.e., distance from the nadir pixel) and just one line above they refer to the viewing zenith angles. What is the difference that the authors imply between the two zenith angles? I would recommend them to give the definitions.
- Page 4, Line 21: As mentioned above, I would consider such way of writing a sentence like “See Kleipool et. (2018) …” not so formal for a scientific paper. Similar type of sentences present in few more points of the manuscript should be removed or restructured.
Section 2.1 (Transmission stability)
- Page 5, Line 5: I would recommend to delete the sentence “It is stable over the first year”.
Section 2.2. (Dark Flux)
- Page 7, Line 8: I suggest to replace “At the change in along-track spatial resolution” with the exact date.
Section 3 (Reference Data)
- Page 9, Line 10: I would suggest to replace “see 3.1” with “as described in Section 3.1”. I find the direct statement “see …” informal for the paper. I would recommend the same for “see 3.2” of line 11 and “see 3.3” of line 12.
- Page 9, Line 10: I suggest to write “… consists of ground measurements from dedicated campaigns” instead of “dedicated campaign ground measurements”.
Section 3.1 (RADCALNET)
- Page 9, Line 19: I would rephrase “…instrument suite which can provide data at several times during the TROPOMI overpasses”.
- Page 9, Line 22-23: The sentence “For more on the RADCALNET dataset, … on the website” seems informal to me. I would try to rephrase it.
Section 3.2 (RRV campaigns)
- Page 9, Line 27: ASD is an acronym which I cannot find in the text.
- Page 9, Line 28: I would change to “an area of 500 x 500 m2” instead of “500 meters by 500 meters”. In general, the sentence “The field measurements of …. TROPOMI at nadir” is a bit too long and complicated for me to follow it well.
- Page 10, Line 1: JPL is an acronym which is not introduced in the text.
- Page 10, Line 7: The word “from” is missing -> “varying from day to day”.
Section 3.3 (Ancillary data)
- Page 11, Line 4: Where is mRPV defined in the text earlier? I only see a reference in Line 8, but the acronym needs to be defined at the point where it is used for the first time in the text.
Section 3.3.1 (MODIS)
- Page 11, Line 17: I would recommend to rephrase the sentence “As such it is … in the next section”.
Section 3.3.3 (MISR)
- Page 12, Line 3: I would recommend to rephrase the sentence “Similar to the MODIS … to be irregular”.
Section 4 (Correction methodology)
- Page 12, Line 12: I would add “in Section 4.1”.
Section 4.1.2 (RemoteC)
- Page 13, Line 2: I would rephrase the first sentence.
Section 4.1.3 (Aerosol Optical Depth)
- Page 13, Line 11: I would rephrase the first sentence and replace “changes from day to day” with “shows a daily variation”.
- Page 13, Line 14: AERONET with capital letters.
Section 4.2 (BRDF normalization)
- Page 13, Line 17: There is a typo. Replace “byt” with “by”.
Section 4.2.1 (mRPV)
- Page 14, Figure 6: I cannot really see any pattern or a seasonal variation for the free parameters in the mRPV model. This figure is not so informative from my point-of-view.
- Page 14, Lines 4-14: I would probably move all the mathematical formulas in an appendix. As far as I understand, the formulas are not originally derived for this study. Therefore, the reference to the source and the description in an appendix would be more optimal for me. Moreover, there are no explanations about the free parameters r0, k and b. What do they represent? What does “hot spot” actually mean?
Section 4.3.1 (Spatial Averaging)
- Page 15, Line 24: “Data” is plural -> Change from “Most data is available” to “Most data are available”.
- Page 16, Line 6: “We realize this introduces …” -> “We realize that this introduces …”
- Page 16, Line 7: By how much is the error? An estimate should be given.
Section 4.3.2 (Time differences and Solar angle)
- Page 16, Line 9: “Variations also exist as a function of the time of day.” What do the authors mean with this statement? It is not clear to me.
- Page 16, Line 20: Please replace the “2nd-degree polynomial fit” with “2nd-order polynomial fit”.
Section 4.4.1 (RADCALNET)
- Page 17, Line 19: I didn’t understand how this multiplication factor of 1.247 was derived.
Section 4.4.2 (Ground campaigns)
- This section seems incomplete. Could the authors elaborate more on how the radiances are derived from the reflectances?
- Page 18, Table 2: The third column refers to IZA and in the caption TROPOMI-SWIR viewing angle is mentioned. I would recommend to keep consistency for the angles IZA and VZA. This comment is relevant to one of my former comments in a previous section.
- Page 19, Figure 9: Similarly, to the previous comment, do the authors prefer to show IZA in the label of x-axis? There is also a small inconsistency in the marker; “red triangle” should be replaced by “red diamond”.
- Page 19, Lines 5-8: I would like to ask the authors to comment further on the uncertainties which are introduced due to the assumption of spectral dependence absence.
Section 5 (Results)
- Page 20, Line 13: Elaborate more on the statement “the accuracy of the mRPV model”. How should the reader interpret this? mRPV model is not accurate enough at larger VZA?
Section 5.2 (TROPOMI-SWIR vs RadCalnet)
- In the text many times RADCALNET is written with capital letters. Here the acronym is written as RadCalnet. I would recommend to keep consistency of the acronyms.
- Page 22, Line 2: “Structural trends in the residuals are seen for the Lambertian model, but not the mRPV model.” I cannot see these trends. I would ask the authors to elaborate more on it.
Section 5.3 (TROPOMI-SWIR vs dedicated campaigns)
- Page 23, Line 5: Where is MIPREP acronym defined?
- Page 23, Line 22: I would replace “There does appear to” with “It appears to”.
- Page 24, Figure 12: There is a small mistake here. Please replace “VIIRS (left)” with “VIIRS (right)” in the caption. I would also like to see a statement related to the second sentence in the caption. It’s OK to omit ratios above 1.5 but it would be informative to specify how often those ratios were found. I would recommend to give a percentage for MODIS and VIIRS separately.
- Page 25, Table 3: Please include in the caption what “F” stands for. The readers would appreciate this information written here.
- Page 26, Table 4: Do the numbers refer to the mean and median ratios of Table 3 using all orbits? The authors should include this information in the caption.
Section 6.2.1 (Cloud Filtering)
- Page 27, Line 8-9: “Due to the albedo of cloud, TROPOMI-SWIR radiance may be different than estimated from the RADCALNET.” Please elaborate more on this and give an estimate of this effect.
Section 6.3 (Implications for TROPOMI)
- Page 27, Line 29: “As such, data on nadir-views remains inconclusive”. Probably more orbits with nadir-views should be investigated. A set of 5 orbits is a very limited sampling to draw any conclusions. Moreover, the word “data” is plural and there is a grammar issue; please correct “remains” to “remain”.
Section 7 (Conclusions)
- Page 28, Line 24: “Vicarious calibration limits are an order of magnitude larger (4-10%).” It is not clear to me what the authors mean with this statement and to which findings/comparisons they refer.
Citation: https://doi.org/10.5194/egusphere-2023-89-RC2 -
AC2: 'Reply on RC2', Tim van Kempen, 16 Jun 2023
>> Dear Referee,
Thank you for the kind words and the suggestions. We have adopted them in the new version of the paper that has been submitted. Specific comments on specific suggestions are given below. If no comment is given, we have adopted your comments in full.
with kind regards,
Tim van Kempen
-----
Abstract
Page 1, Line 7: I would suggest to give a short description of the location Railroad Valley Playa in the abstract. Currently, there is only reference to the region but why this is special and appropriate region for vicarious calibration of TROPOMI-SWIR module is missing. I think this is something that the reader wants to know already when reading the abstract as it is mentioned in the title of this manuscript.
> included in the abstract.
Page 1, Line 10: I would suggest to provide references for the vicarious calibration of OCO-2 and GOSAT missions in the abstract.> The guidelines of the AMT journal dictate that references should not appear in the abstract. They are given in the introduction (Kuze et al., 2014, Bruegge et al., 2019a, 2019b and 2021).
Section 1 (Introduction)
Page 2, Line 12: What happened with the 25 columns and 39 rows? What do the authors mean with the statement “about 975 columns and 217 rows are illuminated”?
> The columns are not all illuminated due to a cover that is on the detector. The rows are not all illuminated due to the exit angle of the dispersing element.
Section 2 (TROPOMI-SWIR Performance)
Page 4, Line 5: The change in the integration time should have been effective since August 2019 and not 2020. Please cross-check this important date. Moreover, I would propose to rephrase the sentence as “effectively changing the spatial resolution in the along-track direction from …”.> Thank you, this was indeed a typo.
Page 4, Line 13: The authors refer to the instrument zenith angle (i.e., distance from the nadir pixel) and just one line above they refer to the viewing zenith angles. What is the difference that the authors imply between the two zenith angles? I would recommend them to give the definitions.> These are the same; clarified in the text.
Section 2.1 (Transmission stability), Section 2.2. (Dark Flux)> We would like to inform the referee that the order of sections 2.1, 2.2 and 2.3 has changed for clarity.
Section 3.2 (RRV campaigns)
Page 9, Line 28: I would change to “an area of 500 x 500 m2” instead of “500 meters by 500 meters”. In general, the sentence “The field measurements of …. TROPOMI at nadir” is a bit too long and complicated for me to follow it well.
> rephrased
Page 10, Line 1: JPL is an acronym which is not introduced in the text.
> in the current version it is used in the introduction.
Section 4.2.1 (mRPV)
Page 14, Figure 6: I cannot really see any pattern or a seasonal variation for the free parameters n the mRPV model. This figure is not so informative from my point-of-view.
> The main pattern is the correlation between the three parameters; the figure visualises that the r_0, k and b parameters are not static for the desert surface.
Page 14, Lines 4-14: I would probably move all the mathematical formulas in an appendix. As far as I understand, the formulas are not originally derived for this study. Therefore, the reference to the source and the description in an appendix would be more optimal for me. Moreover, there are no explanations about the free parameters r0, k and b. What do they represent? What does “hot spot” actually mean?> This was discussed internally before submission, but decided against. The main motivation is that the comparison between the mRPV model and VIIRS/MODIS, including the different corrections, are a central point to the paper. It is our opinion that the lessons learned of the methodology as applied to the data is a more important conclusion that the final percentages derived from it. As such putting the mathematical formulae within the paper seemed prudent. During writing we kept going back to these often to understand how these relate to the corrections.
The relation of the free parameters themselves, as well as the hot spot, as applied to RRV are extensively discussed in Bruegge et al., 2019a, 2019b, in addition to the original papers from Rahman et al., 1993a, 1993b. These papers are also referenced. As opposed to the formulae, we feel that the references should be sufficient.
Section 4.3.1 (Spatial Averaging)
Page 16, Line 7: By how much is the error? An estimate should be given.> we do not know, but discuss this effect in the Section 6. In fact most of the uncertainty in the final result of this paper likely is related to the large pixel size and the spatial averaging steps. We rephrased the sentence to remove the ambiguity and refer to the uncertainties seen in earlier studies.
Section 4.3.2 (Time differences and Solar angle)
Page 16, Line 9: “Variations also exist as a function of the time of day.” What do the authors mean with this statement? It is not clear to me.
> We rephrased the sentence to remove the ambiguity.
Section 4.4.1 (RADCALNET)
Page 17, Line 19: I didn’t understand how this multiplication factor of 1.247 was derived.
> It is the average difference between all dates with a cloudless ToA RADCALNET radiance at 2310 nm and the 2313 nm continuum radiance seen by TROPOMI at viewing angles of less than 3 degrees (i.e. nearly nadir-viewing). For such small angles, no BRDF correction is assumed to be needed. The difference is solely due to the absorption of methane and water within the RADCALNET bandwidth (~10 nm wide, triangular).
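The selection and averaging described above can be sketched as follows; all array names are hypothetical, and the actual matchup selection in the paper also involves cloud screening.

```python
import numpy as np

def radcalnet_scaling(tropomi_continuum, radcalnet_toa, vza_deg, vza_max=3.0):
    """Mean ratio of TROPOMI 2313 nm continuum radiance to cloud-free
    RADCALNET ToA radiance at 2310 nm, restricted to near-nadir views
    (VZA < vza_max), where no BRDF correction is assumed.  The residual
    offset is attributed to CH4/H2O absorption inside the ~10 nm
    triangular RADCALNET bandpass."""
    near_nadir = vza_deg < vza_max
    ratios = tropomi_continuum[near_nadir] / radcalnet_toa[near_nadir]
    return ratios.mean()

# synthetic example: TROPOMI radiances 24.7 % above RADCALNET at nadir
rad = np.array([10.0, 12.0, 11.0, 9.5])
vza = np.array([1.0, 2.5, 5.0, 0.5])      # degrees; the third matchup is rejected
trop = 1.247 * rad
print(radcalnet_scaling(trop, rad, vza))  # close to 1.247
```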
Section 4.4.2 (Ground campaigns)
This section seems incomplete. Could the authors elaborate more on how the radiances are derived from the reflectances?
> This is described in Section 4.1.2. RemoTeC is used to calculate the radiative transfer of the irradiance reflected by the RRV surface back to the TROPOMI instrument.
Page 19, Lines 5-8: I would like to ask the authors to comment further on the uncertainties which are introduced due to the assumption of spectral dependence absence.
> Added. This uncertainty is assumed to be very minor.
Section 5 (Results)
Page 20, Line 13: Elaborate more on the statement “the accuracy of the mRPV model”. How should the reader interpret this? mRPV model is not accurate enough at larger VZA?
> Yes. The model was built upon many measurements of the Earth in the 1990s, none of which had angles larger than 40-45 degrees. We do note that the model significantly outperforms both the MODIS and VIIRS data products at these larger angles.
Section 5.2 (TROPOMI-SWIR vs RadCalnet)
Page 22, Line 2: “Structural trends in the residuals are seen for the Lambertian model, but not the mRPV model.” I cannot see these trends. I would ask the authors to elaborate more on it.
> These trends are seen in the residuals of Fig. 10. Here the Lambertian model (top) shows slopes in time (e.g. in the second half of 2020 and the first half of 2021). The mRPV residuals are random (i.e. a fitted slope would be 0).
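The trend check described in this response amounts to fitting a slope to the residual time series; a minimal sketch, with hypothetical array names:

```python
import numpy as np

def residual_trend(days, residuals):
    """Least-squares slope (per day) of calibration residuals vs time.
    A slope consistent with zero means no structural trend (as for the
    mRPV residuals); a clearly non-zero slope indicates drifts such as
    those seen for the Lambertian model in parts of 2020-2021."""
    slope, _intercept = np.polyfit(days, residuals, 1)
    return slope

days = np.arange(10.0)
drifting = 0.002 * days - 0.01   # residuals with a structural trend
flat = np.zeros(10)              # residuals without a trend
print(residual_trend(days, drifting), residual_trend(days, flat))
```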
Section 5.3 (TROPOMI-SWIR vs dedicated campaigns)
Page 23, Line 5: Where is MIPREP acronym defined?
> Rephrased and removed the MIPREP reference.
Page 24, Figure 12: There is a small mistake here. Please replace “VIIRS (left)” with “VIIRS (right)” in the caption. I would also like to see a statement related to the second sentence in the caption. It’s OK to omit ratios above 1.5 but it would be informative to specify how often those ratios were found. I would recommend to give a percentage for MODIS and VIIRS separately.
> Corrected. We do not give a percentage, but an absolute number, mirroring the text.
Section 6.3 (Implications for TROPOMI)
Page 27, Line 29: “As such, data on nadir-views remains inconclusive”. Probably more orbits with nadir-views should be investigated. A set of 5 orbits is a very limited sampling to draw any conclusions. Moreover, the word “data” is plural and there is a grammar issue; please correct “remains” to “remain”.
> We fully agree. The mostly manual data acquisition is optimized for OCO and GOSAT, which have a much poorer coverage.
Section 7 (Conclusions)
Page 28, Line 24: “Vicarious calibration limits are an order of magnitude larger (4-10%).” It is not clear to me what do the authors mean with this statement and to which findings/comparisons they refer to?
> It refers to the on-board results in the previous bullet. Rephrased for clarity.
Citation: https://doi.org/10.5194/egusphere-2023-89-AC2
Tim Anton van Kempen
Tim J. Rotmans
Richard M. van Hees
Carol Bruegge
Dejian Fu
Ruud Hoogeveen
Thomas J. Pongetti
Robert Rosenberg
Ilse Aben