the Creative Commons Attribution 4.0 License.
Validation and comparison of cloud properties retrieved from passive satellites over the Southern Ocean
Abstract. The clouds over the Southern Ocean (SO) play a vital role in defining the Earth's energy budget. The cloud properties over the SO are known to be different from their Northern Hemisphere counterparts. As a result, monitoring cloud properties over the SO, including macro- and microphysical properties, is of particular interest.
We analysed three passive remote sensing satellite datasets over the SO: MODIS Collection 6.1, AVHRR CMSAF CLARA-A3, and AVHRR PATMOS. For 2015, we validated the cloud mask, cloud top height, and cloud phase by comparing Level 2 retrievals from the passive sensors against the active CloudSat-CALIOP sensors. We also compared the effective radius and cloud optical depth amongst the three passive sensor datasets.
This research found substantial uncertainties in cloud top height, cloud optical depth, and cloud thermodynamic phase over the SO, the extent of which varies with the cloud property and retrieval algorithm used. The cloud mask comparison revealed that only around two-thirds of passive sensor observations agree with active sensor observations, and in the case of AVHRR PATMOS the agreement is lower still. In the comparison of cloud top height, a mean absolute bias of 0.65 km (AVHRR CMSAF), 1.03 km (MODIS), and 1.31 km (AVHRR PATMOS) was observed for single-layer cloud scenes. This mean bias increased to 1.86 km (AVHRR CMSAF), 3.22 km (MODIS), and 3.34 km (AVHRR PATMOS) for multilayered cloud scenes. The ice phase dominated the multilayer cloud top thermodynamic phase in 2015, while liquid was the dominant top phase for single-layer cases. In general, the passive and active sensor phases agree for both liquid and ice, except for AVHRR PATMOS, which frequently misidentified the liquid phase as ice. In the comparison of cloud effective radius, the disagreement between the passive sensors was greater in the presence of multilayer clouds, and markedly higher for ice clouds. We found that the presence of sea ice strongly influences the retrieval of cloud optical depth at high latitudes, with most passive optical depths higher over sea ice than over open ocean. This work highlights the areas where passive cloud retrieval algorithms over the SO could be improved.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2025-209', Anonymous Referee #1, 30 Mar 2025
The Southern Ocean is an important area of cloud research and this is a worthwhile inter-comparison. The background in the Introduction is also quite thorough and does a good job of framing this study in a larger context. There are some inconsistencies with the description of the datasets, particularly PATMOS-x, and the collocation process used in Figure 7. In order to have confidence in the results of this inter-comparison these need to be addressed, and at least some of the analysis will need to be redone. Specific comments below.
Line 133: The POES satellites were operated by NOAA so I believe this would typically be NOAA Polar Operational Environmental Satellites (POES).
Line 135: It says here NOAA-19 is used for the L2 comparisons, but in Table 1 it says the cloud optical properties being compared (COD and CER) are derived from the 1.6-micron channel. Channel 3ab on the AVHRR can measure either 1.6 or 3.75 microns, but not both at once. NOAA-19 is switched to 3b, meaning it measures 3.75 microns, so it shouldn't be possible to have 1.6-micron properties for CMSAF and PATMOS-x.
Line 146: Here and elsewhere PATMOS should be PATMOS-x
Line 154: For version 6.0 ACHA uses 11 and 13.3 channels, not 11 and 12.
Line 231: I'm a little confused by the use of the term 'granules' here. The AVHRR GAC data is reported in ascending and descending orbits. Is that the meaning of granule here, or is it the global L2b data?
Line 410: retreive -> retrieve
Line 453: CMSAF and PATMOS-x both use AVHRR GAC data so they shouldn’t have different spatial resolutions for L2 products. The sampling may be different, but that’s not the same thing.
Line 603: ‘The AVHRR PATMOS sensors exhibited the lowest correlation and the highest bias in both the overall CTH and the multilayer and single-layer cases.’ The numbers reported in Figure 8 don’t seem to support this conclusion. The PATMOS-x RMSE and MBE is lower than MYD and CMSAF for the multi-layer cases and lower than CMSAF for all cases.
Figure 2.: It seems strange to me that the collocation coverage for CMSAF and PATMOS-x are different since they are derived using the same AVHRR measurements.
I suspect the odd PATMOS-x behavior in Figure 7 can be explained by a collocation problem due to an issue with how the PATMOS-x L2b global file is created. The L2b files are created by sampling orbits for a single day to a 0.1-degree global grid. Final orbits from the previous day are included because an orbit can cross from one day to another. So, if you are looking at the end of the day, and that orbit is missing, you can get a situation where the orbit used is from the previous day (this issue was addressed for recent file deliveries but still exists for most of the record). Below is a list of the orbits used in the NOAA-19 L2b ascending file from January 10th, 2015 (taken from the 'source' global attribute in the file); the pertinent files are the first and the last two in the list. The start and end times of the last two files show a time gap, suggesting an orbit wasn't processed. This gap coincides with the start and end times of the first file, which is from January 9th, and with the reported collocation time of 22:50 UTC in Figure 7, suggesting the PATMOS-x orbit being analyzed was from the day before the CloudSat, MODIS, and CMSAF observations.
clavrx_NSS.GHRR.NP.D15009.S2214.E0001.B3051617.GC.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15009.S2356.E0146.B3051718.GC.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15010.S0140.E0335.B3051819.SV.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15010.S0329.E0524.B3051920.WI.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15010.S0518.E0711.B3052021.WI.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15010.S0705.E0853.B3052122.WI.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15010.S0847.E1029.B3052223.GC.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15010.S1204.E1351.B3052425.GC.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15010.S1345.E1531.B3052526.GC.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15010.S1525.E1657.B3052627.WI.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15010.S1652.E1836.B3052728.WI.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15010.S1830.E2025.B3052729.GC.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15010.S2019.E2208.B3052930.GC.hirs_avhrr_fusion.level2.nc
clavrx_NSS.GHRR.NP.D15010.S2344.E0134.B3053132.GC.hirs_avhrr_fusion.level2.nc
It's worth reiterating that this is an issue with how the PATMOS-x L2b files are created and would be difficult for the authors to identify without looking at imagery to ensure the products are observing the same scenes. Regardless, it brings into question the results of Figure 7, and potentially all of the L2 comparison results, depending on how frequently this issue occurs. This should be looked at more carefully.
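Given the filename convention in the list above (the `Dyyddd` field encodes the 2-digit year and day of year, and `Shhmm`/`Ehhmm` the UTC start and end times), a day's `source` attribute can be checked for exactly this kind of gap programmatically. The sketch below is a hypothetical helper, not part of any PATMOS-x or CLAVR-x tooling; the function names, the assumption of post-2000 dates, and the 30-minute gap threshold are my own choices for illustration.

```python
import re
from datetime import datetime, timedelta

# Match the Dyyddd.Shhmm.Ehhmm fields of a CLAVR-x/NOAA L1b-style filename.
PATTERN = re.compile(r"\.D(\d{2})(\d{3})\.S(\d{4})\.E(\d{4})\.")

def orbit_times(filename):
    """Return (start, end) datetimes parsed from the filename (assumes 20yy)."""
    yy, ddd, s, e = PATTERN.search(filename).groups()
    day = datetime(2000 + int(yy), 1, 1) + timedelta(days=int(ddd) - 1)
    start = day + timedelta(hours=int(s[:2]), minutes=int(s[2:]))
    end = day + timedelta(hours=int(e[:2]), minutes=int(e[2:]))
    if end < start:            # orbit crosses midnight into the next day
        end += timedelta(days=1)
    return start, end

def find_gaps(filenames, max_gap_minutes=30):
    """Flag consecutive orbits whose end/start times leave a suspicious gap."""
    times = sorted(orbit_times(f) for f in filenames)
    gaps = []
    for (_, prev_end), (next_start, _) in zip(times, times[1:]):
        if next_start - prev_end > timedelta(minutes=max_gap_minutes):
            gaps.append((prev_end, next_start))
    return gaps
```

Run against the last few entries of the January 10th list, this flags the gap between the orbit ending at 22:08 and the one starting at 23:44, i.e. the missing orbit RC1 describes.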
Citation: https://doi.org/10.5194/egusphere-2025-209-RC1
RC2: 'Comment on egusphere-2025-209', Anonymous Referee #2, 31 Mar 2025
The paper is reasonably well organized and written. The study compares cloud properties derived from different platforms and algorithms over the Southern Ocean, which could be worthwhile if it leads to improved understanding of algorithm deficiencies and uncertainties. Unfortunately, I don't think the paper sheds new light on these things, so the purpose of the paper isn't very clear to me. Furthermore, the results are confusing, don't make sense in some cases, and at times contradict previous findings. As a result, I find some of the results difficult to understand as presented and am concerned that the analysis may be flawed, perhaps due to collocation problems and sampling differences. I think the authors need to reevaluate and verify their methods to ensure the results are robust for all datasets.
Lines 57, 742: replace Minnis et al., 2011 with Minnis et al., 2020 as the 2020 reference is the correct one that describes the data used in the Hinkelman analysis
Minnis, P., S. Sun-Mack, Y. Chen, F.-L. Chang, C. R. Yost, W. L. Smith, Jr., P. W. Heck, R. F. Arduini, S. Bedka, Y.Yi, G. Hong, Z. Jin, D. Painemal, R. Palikonda, B. Scarino, D. A. Spangenberg, R. Smith, Q. Z. Trepte, P. Yang, and Y. Xie, 2020: CERES MODIS cloud product retrievals for Edition 4, Part 1: Algorithm changes. IEEE Trans. Geosci. Remote Sens., doi: 10.1109/TGRS.2020.3008866.
Line 133: NASA should be NOAA
Line 224: how well do MODIS along track assessments compare to the 0.25 deg resampled L3 products?
Line 265: It isn't clear which bin the uncertain pixels were put in. Please explain.
Line 290-298. Check to see that the values in the text match those in the table. I think there are some discrepancies.
Line 315-316: What finding is consistent, the values or the relative differences with CALIOP? Bromwich 2012 attributed the lower CALIOP cloud fraction to sampling, right? What’s the implication for your finding here? How does this support or contradict the fact that CALIOP is more sensitive to clouds than MODIS?
Line 347: remove ‘this’
Line 355: you already stated this on line 349
Line 376: This is a great figure, but the results are suspicious, particularly for AP which looks like it may be from a different swath. You should double check this and make sure that a mistake hasn’t been made in the collocation process.
Line 381: Consider rephrasing this: “The COD shown on each product uses the passive sensors’ 1.6 μm channel.” Instead of ‘uses’, perhaps ‘was derived using’ would make more sense.
Line 394-395: Why is AC sensitive to the midlevel cloud and MODIS not sensitive? Is this possibly the result of a space/time mismatch. Did you look at the BT’s? Are you sure the sensors are viewing the same space?
Line 403-404: This statement about passive radiometers has not been proven here and is generally incorrect. It would be more appropriate to say “This suggests that the passive sensor algorithms with a single layer assumption are not well suited for distinguishing thin cirrus in some overlapping cloud conditions.” Furthermore, why is CC indicating a cloud top phase of liquid water for the clouds at 6 and 8 km? Is that trustworthy? What's the temperature at those levels?
Line 405: clouds,
Line 405-409: Here again, the results look suspicious. Why is MODIS systematically 2-3km too high. Are we learning anything here?
Line 420: convective cloud? Where is that?
Line 441: Here and elsewhere, be careful with wording that suggests active sensors serve as ground-truth for COD (e.g. passive sensors underestimate…) The lidar, radar and passive all have different sensitivities to the cloud PSD’s, and the active sensors can actually miss information in the vertical column.
Line 449-461: This section is consistent with previous sections in that the results don’t make a lot of sense and further call to question the veracity of the collocation process adopted here. Can the two AVHRR datasets have this much difference in skill? I don’t believe I’ve ever seen a MODIS hit rate this low for daytime clouds. Aren’t these results (e.g. for MODIS) also inconsistent with what you show in figure 4?
Line 473: Why would the AC algorithm be more sensitive to high thin clouds than the MODIS algorithm given how much more information is available from MODIS? Just because the AC neural net produces better mean CTH’s, it also leads to lot of height overestimates. How does this imply that AC is more sensitive to thin cirrus?
Line 475: You already stated on line 463 what Fig 8 shows.
Line 481-483: CTH ‘overestimates’ are due to inversions, where the BT matches the profile temperature at a ‘lower’ height. Do you mean higher height?
There is a good discussion of the inversion problem here (and references within):
Dong, X., P. Minnis, B. Xi, S. Sun-Mack, and Y. Chen (2008), Comparison of CERES-MODIS stratus cloud properties with ground-based measurements at the DOE ARM Southern Great Plains site, J. Geophys. Res., 113, D03204, doi:10.1029/2007JD008438.
Line 507-540: Section lacks discussion regarding difference across algorithms with respect to CER magnitudes (i.e. why is MODIS ice Re so much higher than the others), and CER distributions (i.e. why does AC ice look like AC water, unlike AP and MODIS)
Line 549-550: Why do you think the results disagree with Frey (2018) and Palm (2010)? Their results make more sense to me from a physical standpoint and from a retrieval difficulty standpoint.
Line 570-575: I'd prefer the convention 'more than 20% larger' rather than 'was >20%'.
Line 611-612. This is an incomplete sentence.
Citation: https://doi.org/10.5194/egusphere-2025-209-RC2
CC1: 'Comment on egusphere-2025-209', Andrew Heidinger, 03 Apr 2025
A nice topic to explore with satellite data. My criticisms are mainly about the PATMOS-x data and how it is included. I think they should be addressed or PATMOS-x should be removed. I assume that this work will lead to climate studies and wonder if you state that this is your end goal for a future study.
Line 6: There was a project called PATMOS that was part of the EOS pathfinder efforts in the 1990s. I think you are referring to the PATMOS-x (PATMOS extended) project here and should make sure to clarify the terminology.
Line 130: Both Cm-SAF and PATMOS-x use AVHRR GAC which does not provide data at 1.1km as you state.
Line 155: PATMOS-x uses an optimal estimation cloud height that uses multiple channels simultaneously. The way this is written, it sounds like the CO2 channel is needed for CTH and no CTH is available using 11 and 12 micron.
Figures 3-4-5. You mention that PATMOS-x is the worst but you don't show it in these figures so it is hard to comprehend or compare.
Figure 7. PATMOS-x looks out of family and this is hard to believe. Are you certain that the correct AVHRR pass was used? There are 6 per day I think at this time. Could you provide more information if people wanted to repeat this analysis.
Line 465: I don't understand why the cloud height would not be higher without the COD limit if CIRRUS are high and thin clouds. Unsure of the point of this.
Line 400: You mention that AP's clouds are too low and it has too much ice cloud compared to water cloud. These are inconsistent with each other since ice clouds are higher than water clouds.
Figure 9: AP looks off here and does not seem in family. Other publications show it to be in family - the GEWEX Cloud Assessment and I wonder if something is wrong with the AP analysis. The lack of AP data in the maps like the other data sets would remove this suspicion.
Table 9: the results tabulated here are more in line than shown in Figure 9.
Citation: https://doi.org/10.5194/egusphere-2025-209-CC1