An information content approach to diagnosing and improving CLIMCAPS retrievals across instruments and satellites
Abstract. The Community Long-term Infrared Microwave Combined Atmospheric Product System (CLIMCAPS) characterizes the atmospheric state as vertical profiles (commonly known as soundings or retrievals) of temperature, water vapor, CO2, CO, CH4, O3, HNO3 and N2O, together with a suite of Earth surface and cloud properties. The CLIMCAPS record spans more than two decades (2002–present) because it utilizes measurements from a series of different instruments on different satellite platforms. Most notably, these are AIRS+AMSU (Atmospheric Infrared Sounder + Advanced Microwave Sounding Unit) on Aqua and CrIS+ATMS (Cross-track Infrared Sounder + Advanced Technology Microwave Sounder) on SNPP and the JPSS series. Both instrument suites are on satellite platforms in low-Earth orbit with local overpass times of ~1:30 am/pm. The CrIS interferometers are identical across the different platforms, but differ from AIRS, which is a grating spectrometer. To first order, CrIS+ATMS and AIRS+AMSU are similar enough to allow a continuous CLIMCAPS record, which was first released in 2020 as Version 2 (V2). In this paper, we take a closer look at CLIMCAPS V2 soundings from AIRS+AMSU (on Aqua) and CrIS+ATMS (on SNPP) to diagnose product continuity across the two instrument suites. We demonstrate how averaging kernels, as signal-to-noise ratio (SNR) indicators, can be used to understand and improve multi-instrument systems such as CLIMCAPS. We conclude with recommendations for future CLIMCAPS upgrades.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2024-2448', Anonymous Referee #1, 30 Oct 2024
Review of "An information content approach to diagnosing and improving CLIMCAPS retrievals across instruments and satellites", Smith and Barnet.
This manuscript describes the use of several metrics (cloud clearing, degrees of freedom (DOF) from the averaging kernel matrix, and the amplitude of the retrieval update relative to the prior) to evaluate thermal IR retrievals. These metrics are applied to output from the operational V2 CLIMCAPS retrieval system applied to two sensor systems (AIRS/Aqua and CrIS/SNPP), along with several experimental versions of CLIMCAPS. Comparisons between output from the different experimental versions lead to an explanation for some of the inconsistency in CLIMCAPS output from the two sensors noted in earlier work (Smith and Barnet 2020). The result is a plan for a potential Version 3 CLIMCAPS that would have improved consistency between output from the two sensors.
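For readers less familiar with the DOF metric, the following is a minimal sketch of how degrees of freedom for signal are conventionally derived from an averaging kernel matrix, assuming the standard trace definition (Rodgers, 2000); the matrix values below are illustrative placeholders, not CLIMCAPS output.

```python
import numpy as np

# Degrees of freedom (DOF) for signal from an averaging kernel matrix A,
# using the conventional definition DOF = trace(A) (Rodgers, 2000).
# The matrix is an illustrative placeholder, not CLIMCAPS output.
A = np.array([[0.8, 0.1, 0.0],
              [0.1, 0.6, 0.2],
              [0.0, 0.2, 0.4]])

dof = np.trace(A)        # total DOF for signal
per_level = np.diag(A)   # per-level sensitivity of the retrieval to the true state
print(f"DOF = {dof:.2f}, per-level sensitivity = {per_level}")
```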
The paper is generally well structured and written and appropriate for AMT. I suggest some minor revisions to improve the clarity and presentation, particularly for the figures. Minor grammar corrections and some suggested rewordings are listed at the end.
Minor revisions
---------------
The aim of the analysis of the experimental CLIMCAPS versions was to improve the *consistency* between the AIRS and CrIS results; the results are an important step forward in that regard, and I think the manuscript title should more directly reflect that. If possible at this stage, I would suggest changing the manuscript title to
"An information content approach to diagnosing and improving consistency of CLIMCAPS retrievals across instruments and satellites".
My only technical critique concerns the two paragraphs in section 4 that deal with Figure 7 (lines 566 - 594). Figure 7 contains RMSE relative to GFS; without substantial additional analysis comparing GFS and MERRA-2, it is very unclear what these plots are revealing. If a lower RMSE relative to GFS is evidence of "higher accuracy" of CLIMCAPS, that makes an implicit assumption that GFS is itself more accurate than MERRA-2. I am not an expert on NWP models, but that assumption needs extra support and references. If GFS and MERRA-2 have similar accuracy, or if MERRA-2 has better accuracy, then the RMSE metric relative to GFS would not tell us anything about CLIMCAPS.
I feel this section is not adding anything to the paper in its current form. My suggestion is to remove this figure and the two corresponding paragraphs from the paper; otherwise, more explanation is needed to describe what this reveals about the CLIMCAPS retrievals.
At the beginning of section 2 (Lines 116 - 124) there is a description of some challenges of developing the CLIMCAPS retrieval system. This seems out of place here, and would be better included as part of the discussion at the end of the paper. Also, one of the unique aspects of CLIMCAPS is that it effectively spans two missions - this introduces some funding/support challenges, as it seems NASA support is generally mission-centric. It may be useful to expand on this issue since the authors have first-hand experience here.
At line 372-375 there is a short section about the Planetary Boundary Layer. None of the analysis in the paper is targeted at PBL performance, so this is distracting from the central themes of the paper. I would recommend removing these few sentences.
Figures 1, 2, 3, and 8 use rainbow colormaps, which do not faithfully represent the data and are not interpretable by readers with color vision impairment:
https://hess.copernicus.org/articles/25/4549/2021/
https://doi.org/10.1175/BAMS-D-13-00155.1
https://matplotlib.org/stable/users/explain/colors/colormaps.html
Instead, use a perceptually uniform colormap, or a diverging colormap (blue-white-red) for the difference plots such as Figure 8, where the data are centered across zero. Examples can be found in the python matplotlib documentation:
https://matplotlib.org/stable/gallery/color/colormap_reference.html
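A minimal matplotlib sketch of this recommendation, using a perceptually uniform sequential colormap for a field and a zero-centered diverging colormap for a difference; the 2-D field is synthetic, and the colormap names (viridis, RdBu_r) are simply examples from the linked gallery, not a prescription.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative data only: a smooth 2-D field and a zero-centered difference.
x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
field = np.sin(3 * x) * np.cos(3 * y)
diff = field - field.mean()

fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(8, 3))

# Perceptually uniform colormap for sequential data.
im0 = ax0.pcolormesh(x, y, field, cmap="viridis")
fig.colorbar(im0, ax=ax0)

# Diverging colormap with symmetric limits so zero maps to the neutral color.
vmax = np.abs(diff).max()
im1 = ax1.pcolormesh(x, y, diff, cmap="RdBu_r", vmin=-vmax, vmax=vmax)
fig.colorbar(im1, ax=ax1)

plt.tight_layout()
plt.show()
```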
Figure 4 - Suggest using a logarithmic y-scale for the spectral [RTAerr] plot. This would still make it obvious that the AIRS error spectrum is much smaller than the CrIS FSR error spectrum, but would also show the spectral shape of the AIRS error spectrum, which is of interest in its own right.
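A minimal sketch of the suggested change, with synthetic curves standing in for the AIRS and CrIS FSR [RTAerr] spectra; the magnitudes and wavenumber grid are placeholders, not values from the manuscript.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic error spectra of very different magnitude; a logarithmic y-axis
# keeps the shape of the smaller spectrum visible.
wavenumber = np.linspace(650, 2550, 500)   # cm^-1, placeholder grid
airs_err = 0.02 * (1 + 0.5 * np.sin(wavenumber / 100))
cris_err = 0.2 * (1 + 0.5 * np.cos(wavenumber / 150))

fig, ax = plt.subplots()
ax.plot(wavenumber, airs_err, label="AIRS (synthetic)")
ax.plot(wavenumber, cris_err, label="CrIS FSR (synthetic)")
ax.set_yscale("log")                       # the suggested change
ax.set_xlabel("wavenumber (cm$^{-1}$)")
ax.set_ylabel("RTA error (arbitrary units)")
ax.legend()
plt.show()
```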
Figure 6 - the overplotted lines are very unclear, particularly in panel (b) with 4 lines plotted on top of each other. Instead of plotting the range of variation as a horizontal error bar, I recommend plotting the range with either thin dotted lines, or with a filled semitransparent polygon (see Figure 4 of https://amt.copernicus.org/articles/17/6223/2024/). Either of those options would remove the horizontal line of the error bar, which is cluttering the graphs.
Finally, I suggest splitting apart (b) and (c) into multiple subplots - in each case, plot only two cases together, e.g., plot the "V2" data (as the baseline performance) together with one of the test configurations.
Also, the range of variation is plotted as +/- 1 Std Dev, but I might expect these distributions could be very skewed. If switching to using percentiles to mark the range reveals skewed distributions, then those should be used instead of the standard deviation.
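A minimal sketch of the suggested presentation, assuming a synthetic ensemble of vertical profiles; the 10th-90th percentile band, grid, and figure dimensions are illustrative choices, not values from the manuscript.

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot the median profile with a shaded percentile band (fill_betweenx)
# instead of horizontal error bars; synthetic ensemble, not real data.
rng = np.random.default_rng(0)
pressure = np.linspace(1000, 100, 50)                  # hPa
profiles = 1.0 + 0.3 * rng.standard_normal((200, pressure.size))

p10, p50, p90 = np.percentile(profiles, [10, 50, 90], axis=0)

fig, ax = plt.subplots(figsize=(4, 5))
ax.fill_betweenx(pressure, p10, p90, alpha=0.3, label="10th-90th percentile")
ax.plot(p50, pressure, label="median")
ax.set_ylim(1000, 100)                                 # pressure decreasing upward
ax.set_xlabel("value")
ax.set_ylabel("pressure (hPa)")
ax.legend()
plt.show()
```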
Grammatical corrections
-----------------------
Line 173
"discreet" -> "discrete"

Line 185
"AK" - I don't think this was defined as "Averaging Kernel" yet - it looks like this is described around line 502.

Line 211
"ADIFF > 0" - an absolute value is implied here, I think. I would make it explicit to be more clear ("|ADIFF| > 0").

Line 254
"Cloud clearing is spatially linear, so cloud-cleared radiances are retrieved first."
I would think the primary rationale is that the radiative effect of clouds is by far the largest control on the measured radiance, and that the spectral perturbation impacts most of the spectral range.

Line 257
"(iii) [h2o_vap] vapor is highly non-linear, but it can be retrieved with accuracy once [air_temp] is known"
Similar to above - doesn't water vapor need to be the first concentration retrieval, because it is the largest perturbation (after clouds and the temperature profile), and it covers so much of the spectrum?

Line 286
"can primarily be attributed to the time difference in observation between the two satellites"
Please remind the reader what the approximate time difference is for the data under analysis (2016).

Line 291
Maybe missing some words here; I think it needs to say:
"...while the AIRS FOVs maintain their alignment with the cross-track scan direction irrespective of view angle."

Line 320
"... the fact that AIRS requires a frequency correction"
Is there a reference for this? I am unaware of this effect, or what it means.

Line 335
"... the two MERRA2 reanalysis fields on either side of the measurement in space and time, and interpolating to the exact location."
Is this linear interpolation? (Please state in the text.) If it is something more complicated than linear interpolation, is there a reference available?

Line 346
"AK,s" -> "AK's"

Line 367
"degreasing" -> "decreasing"

Line 429
"...and thus remove a significant portion of the geophysical noise otherwise present in the CrIS interferograms"
I think this should refer to measurement noise, not geophysical noise? I am not sure how apodization of the spectrum would change the geophysical noise.

Footnote in Table 1 (around line 444) - I think this should refer to Figure 5, not Figure 4.

Line 494
"SNR vary" -> "SNR varies"

Line 535
Should this be B_max = 0.2? (Not 2.0)

Line 666
"now absent" -> "not absent"

Line 667
"systematically propagating" -> "systematic propagation"

Citation: https://doi.org/10.5194/egusphere-2024-2448-RC1
RC2: 'Comment on egusphere-2024-2448', Anonymous Referee #2, 22 Nov 2024
The paper introduces new settings related to the CLIMCAPS retrieval for different instruments and satellites. It is well-structured and provides detailed explanations, but some sections are overly lengthy and could be revised for conciseness. Additionally, there are several minor corrections that need to be addressed.
major revisions:
- The abstract provides extensive details about the instruments, satellites, and the CLIMCAPS system, but it lacks a focus on the results and the novelty of the paper. The results and their implications are only briefly summarized. Please consider a more balanced abstract.
- The introduction is too long, spanning about three pages. It contains excessive background information, such as discussions on MODIS, VIIRS, and CERES, which are not used in this paper and are therefore irrelevant. The introduction could be condensed to focus more on the study's specific goals and context.
- Section 2 is also too lengthy. For example, there is a detailed explanation of NUCAPS and AST-Aqua V7, which, while useful for comparison, could be summarized more. The discussion of dynamic regularization and SVD could also be more concise.
- Section 3, Experimental Design, also repeats certain concepts multiple times. For instance, the explanations about the sequential retrieval process and the covariance matrix are repeated unnecessarily.
minor corrections:
- Line 63: NASA AST: what does AST stand for? It is not defined in the paper.
- Line 117: What does NUCAPS stand for?
- Line 184: The acronym AKs is introduced for the first time in line 184, but its meaning is only explained later in Section 2.3, and the term Averaging Kernel is mentioned again later for Fig. 6 (Section 4). Please address this inconsistency.
- Line 274: The same applies to "FOR": it is defined for the first time in line 274, but it is used several times before that, for example in line 210, …
Data sets
CLIMCAPS-Aqua V2 record Chris Barnet and NASA Sounder SIPS https://doi.org/10.5067/JZMYK5SMYM86
CLIMCAPS-SNPP full spectral resolution V2 record Chris Barnet and NASA Sounder SIPS https://doi.org/10.5067/62SPJFQW5Q9B