An information content approach to diagnosing and improving CLIMCAPS retrievals across instruments and satellites
Abstract. The Community Long-term Infrared Microwave Combined Atmospheric Product System (CLIMCAPS) characterizes the atmospheric state as vertical profiles (commonly known as soundings or retrievals) of temperature, water vapor, CO2, CO, CH4, O3, HNO3 and N2O, together with a suite of Earth surface and cloud properties. The CLIMCAPS record spans more than two decades (2002–present) because it utilizes measurements from a series of different instruments on different satellite platforms. Most notably, these are AIRS+AMSU (Atmospheric Infrared Sounder + Advanced Microwave Sounding Unit) on Aqua and CrIS+ATMS (Cross-track Infrared Sounder + Advanced Technology Microwave Sounder) on SNPP and the JPSS series. Both instrument suites are on satellite platforms in low-Earth orbit with local overpass times of ~1:30 am/pm. The CrIS interferometers are identical across the different platforms, but differ from AIRS, which is a grating spectrometer. To first order, CrIS+ATMS and AIRS+AMSU are similar enough to allow a continuous CLIMCAPS record, which was first released in 2020 as Version 2 (V2). In this paper, we take a closer look at CLIMCAPS V2 soundings from AIRS+AMSU (on Aqua) and CrIS+ATMS (on SNPP) to diagnose product continuity across the two instrument suites. We demonstrate how averaging kernels, as signal-to-noise ratio (SNR) indicators, can be used to understand and improve multi-instrument systems such as CLIMCAPS. We conclude with recommendations for future CLIMCAPS upgrades.
RC1: 'Comment on egusphere-2024-2448', Anonymous Referee #1, 30 Oct 2024
Review of "An information content approach to diagnosing and improving
CLIMCAPS retrievals across instruments and satellites", Smith and Barnet.This manuscript describes use of several metrics metrics (cloud clearing, degrees of freedom (DOF) from the averaging kernel matrix, and the amplitude of the retrieval update related to the prior) to evaluate thermal IR retrievals. These metrics are applied to output from the operational V2 CLIMCAPS retrieval system applied to two sensor systems (AIRS/Aqua and CrIS/SNPP) along with several experimental versions of CLIMCAPS. Comparisons between output from the different experimental versions leads to an explanation for some of the inconsistency noted in CLIMCAPS output from the two sensors, noted in earlier work (Smith and Barnet 2020). The result is a plan for a potential version 3 CLIMCAPS that would have improved consistency between output from the two sensors.
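As background for readers less familiar with the first two metrics: in standard optimal-estimation usage (e.g., Rodgers, 2000), the degrees of freedom for signal is the trace of the averaging kernel matrix. A minimal illustrative sketch (function and variable names are mine, not CLIMCAPS code):

```python
import numpy as np

def degrees_of_freedom(averaging_kernel: np.ndarray) -> float:
    """Degrees of freedom for signal: the trace of the averaging kernel matrix."""
    return float(np.trace(averaging_kernel))

def update_amplitude(x_retrieved: np.ndarray, x_prior: np.ndarray) -> np.ndarray:
    """Per-level amplitude of the retrieval update relative to the prior (illustrative)."""
    return np.abs(x_retrieved - x_prior)

# Toy 3-level averaging kernel: diagonal dominance indicates sensitivity to the true state
A = np.array([[0.8, 0.1, 0.0],
              [0.1, 0.6, 0.1],
              [0.0, 0.1, 0.3]])
print(degrees_of_freedom(A))   # 1.7 degrees of freedom for this toy case
```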
The paper is generally well structured and written and appropriate for AMT. I suggest some minor revisions to improve the clarity and presentation, particularly for the figures. Minor grammar corrections and some suggested rewordings are listed at the end.
Minor revisions
---------------

The aim of the analysis of the experimental CLIMCAPS versions was to improve the *consistency* between the AIRS and CrIS results; the results are an important step forward in that regard, and I think the manuscript title should more directly reflect that. If possible at this stage, I would suggest changing the manuscript title to
"An information content approach to diagnosing and improving consistency of CLIMCAPS retrievals across instruments and satellites".
My only technical critique concerns the two paragraphs in section 4 that deal with Figure 7 (lines 566-594). Figure 7 contains RMSE relative to GFS; without substantial other analysis comparing GFS and MERRA-2, it is very unclear what these plots are revealing. If a lower RMSE relative to GFS is evidence of "higher accuracy" of CLIMCAPS, that makes an implicit assumption that GFS is itself more accurate than MERRA-2. I am not an expert on NWP models, but that assumption needs extra support and references. If GFS and MERRA-2 have similar accuracy, or if MERRA-2 has better accuracy, then the RMSE metric relative to GFS would not tell us anything about CLIMCAPS.
I feel this section is not adding anything to the paper in the current form. My suggestion is to remove this figure and the two corresponding paragraphs from the paper; otherwise this needs more explanation to describe what this is revealing about the CLIMCAPS retrievals.
At the beginning of section 2 (lines 116-124) there is a description of some challenges of developing the CLIMCAPS retrieval system. This seems out of place here and would be better included as part of the discussion at the end of the paper. Also, one of the unique aspects of CLIMCAPS is that it effectively spans two missions - this introduces some funding/support challenges, as it seems NASA support is generally mission-centric. It may be useful to expand on this issue since the authors have first-hand experience here.
At lines 372-375 there is a short section about the Planetary Boundary Layer. None of the analysis in the paper is targeted at PBL performance, so this is distracting from the central themes of the paper. I would recommend removing these few sentences.
Figures 1, 2, 3, and 8 use rainbow colormaps, which do not faithfully represent the data and are not interpretable by readers with color vision impairment:
https://hess.copernicus.org/articles/25/4549/2021/
https://doi.org/10.1175/BAMS-D-13-00155.1
https://matplotlib.org/stable/users/explain/colors/colormaps.html

Instead, use a perceptually uniform colormap, or a diverging colormap (blue-white-red) for the difference plots such as Figure 8, where the data are centered on zero. Examples can be found in the python matplotlib documentation (and in the sketch below):
https://matplotlib.org/stable/gallery/color/colormap_reference.html
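For concreteness, a minimal matplotlib sketch of both recommendations, using placeholder fields rather than the actual CLIMCAPS data:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
field = rng.random((50, 50))          # placeholder data field (e.g., a DOF map)
diff = rng.normal(size=(50, 50))      # placeholder difference field centered on zero

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))

# Perceptually uniform sequential colormap for ordinary fields
im1 = ax1.pcolormesh(field, cmap="viridis")
fig.colorbar(im1, ax=ax1)

# Diverging colormap with limits symmetric about zero for difference plots (e.g., Figure 8)
vmax = np.abs(diff).max()
im2 = ax2.pcolormesh(diff, cmap="RdBu_r", vmin=-vmax, vmax=vmax)
fig.colorbar(im2, ax=ax2)

plt.show()
```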
Figure 4 - Suggest using a logarithmic y-scale for the spectral [RTAerr] plot. This would still make it obvious that the AIRS error spectrum is much smaller than the CrIS FSR error spectrum, but would also show the spectral shape of the AIRS error spectrum, which is of interest in its own right.
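A brief matplotlib illustration of this, with synthetic error spectra standing in for the actual [RTAerr] curves:

```python
import numpy as np
import matplotlib.pyplot as plt

wavenumber = np.linspace(650, 2550, 400)                    # placeholder spectral axis (cm^-1)
airs_err = 0.05 + 0.02 * np.random.rand(wavenumber.size)    # placeholder AIRS RTA error
cris_err = 0.5 + 0.2 * np.random.rand(wavenumber.size)      # placeholder CrIS FSR RTA error

fig, ax = plt.subplots()
ax.plot(wavenumber, cris_err, label="CrIS FSR")
ax.plot(wavenumber, airs_err, label="AIRS")
ax.set_yscale("log")   # both curves stay visible; the smaller AIRS spectrum keeps its shape
ax.set_xlabel("Wavenumber (cm$^{-1}$)")
ax.set_ylabel("RTA error")
ax.legend()
plt.show()
```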
Figure 6 - the overplotted lines are very unclear, particularly in panel (b) with 4 lines on top of each other. Instead of plotting the range of variation as a horizontal error bar, I recommend plotting the range with either thin dotted lines or a filled semitransparent polygon (see Figure 4 of https://amt.copernicus.org/articles/17/6223/2024/). Either of those options would remove the horizontal line of the error bar, which is cluttering the graphs.
Finally, I suggest splitting apart (b) and (c) into multiple subplots - in each case, plot only two cases together, e.g., plot together the "V2" data (as the baseline performance) along with one of the test configurations.
Also, the range of variation is plotted as +/- 1 Std Dev, but I might expect these distributions could be very skewed. If switching to using percentiles to mark the range reveals skewed distributions, then those should be used instead of the standard deviation.
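As an illustration of the filled-polygon and percentile suggestions, a short matplotlib sketch with synthetic profiles standing in for the Figure 6 statistics:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder ensemble of vertical profiles (rows = cases, columns = pressure levels)
pressure = np.linspace(1000, 100, 30)                 # hPa
profiles = np.random.randn(500, pressure.size) + 1.0

median = np.percentile(profiles, 50, axis=0)
p16, p84 = np.percentile(profiles, [16, 84], axis=0)  # percentile range instead of +/- 1 std dev

fig, ax = plt.subplots()
ax.plot(median, pressure, color="C0", label="median")
ax.fill_betweenx(pressure, p16, p84, color="C0", alpha=0.3, label="16th-84th percentile")
ax.invert_yaxis()                                     # pressure decreasing upward
ax.set_ylabel("Pressure (hPa)")
ax.legend()
plt.show()
```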
Grammatical corrections
-----------------------

Line 173
"discreet" -> "discrete"

Line 185
"AK" - I don't think this was defined as "Averaging Kernel" yet - it looks like this is described around line 502.

Line 211
"ADIFF > 0" - an absolute value is implied here, I think. I would make it explicit to be more clear ("|ADIFF| > 0").

Line 254
"Cloud clearing is spatially linear, so cloud-cleared radiances are retrieved first."
I would think the primary rationale is that the radiative effect of clouds is by far the largest control on the measured radiance, and that the spectral perturbation impacts most of the spectral range.

Line 257
"(iii) [h2o_vap] vapor is highly non-linear, but it can be retrieved with accuracy once [air_temp] is known"
Similar to above - doesn't water vapor need to be the first concentration retrieval, because it is the largest perturbation (after clouds and the temperature profile), and it covers so much of the spectrum?

Line 286
"can primarily be attributed to the time difference in observation between the two satellites"
Please remind the reader what the approximate time difference is for the data under analysis (2016).

Line 291
Maybe missing some words here; I think it needs to say:
"...while the AIRS FOVs maintain their alignment with the cross-track scan direction irrespective of view angle."

Line 320
"... the fact that AIRS requires a frequency correction"
Is there a reference for this? I am unaware of this effect, or what it means.

Line 335
"... the two MERRA2 reanalysis fields on either side of the measurement in space and time, and interpolating to the exact location."
Is this linear interpolation? (Please state in the text.) If it is something more complicated than linear interpolation, is there a reference available?

Line 346
"AK,s" -> "AK's"

Line 367
"degreasing" -> "decreasing"

Line 429
"...and thus remove a significant portion of the geophysical noise otherwise present in the CrIS interferograms"
I think this should refer to measurement noise, not geophysical noise? I am not sure how apodization of the spectrum would change the geophysical noise.

Footnote in Table 1 (around line 444) - I think this should refer to Figure 5, not Figure 4.

Line 494
"SNR vary" -> "SNR varies"

Line 535
Should this be B_max = 0.2? (Not 2.0)

Line 666
"now absent" -> "not absent"

Line 667
"systematically propagating" -> "systematic propagation"

Citation: https://doi.org/10.5194/egusphere-2024-2448-RC1
Data sets
CLIMCAPS-Aqua V2 record Chris Barnet and NASA Sounder SIPS https://doi.org/10.5067/JZMYK5SMYM86
CLIMCAPS-SNPP full spectral resolution V2 record Chris Barnet and NASA Sounder SIPS https://doi.org/10.5067/62SPJFQW5Q9B