Toward Improving Ice Cloud Microphysics Retrievals For Sub-millimeter Polarimeter-Radiometers
Abstract. Ice cloud microphysical properties, such as phase, shape, size, and density, introduce significant uncertainties and biases that affect our understanding of how clouds influence weather and climate. Upcoming spaceborne sub-millimeter radiometers are expected to help reduce the ice-microphysics-induced uncertainties in observations. Yet our knowledge of ice microphysics in this spectral range is still limited, and comparison/validation work remains sparse.
This paper delivers a comprehensive cross-instrument closure study using active and passive remote sensing together with in-situ cloud probe measurements collected during NASA's IMPACTS field campaign. Through the combined use of two radars, one lidar, and in-situ cloud probe measurements, a comprehensive best-reference "truth" is generated, which is then used to validate collocated sub-mm CoSSIR radiance measurements. We find that only with a realistic vertical hydrometeor type classification can all types of measurements reach closure with minimal discrepancies between simulations and observations, which is critical for generating high-quality retrievals for frozen hydrometeors.
We further present a comprehensive exploration of the scientific merit of polarimetric measurements for improving frozen hydrometeor microphysics retrievals. For the first time, we validate previous theoretical predictions that sub-mm polarimetric signals can be used to retrieve ice particle habit and size. Moreover, we find it possible to differentiate the detailed vertical structure of hydrometeor types using both polarimetric and radiance measurements. This paper lays out concrete steps toward ensuring timely delivery of high-quality science products for the sub-mm radiometer missions, as well as toward new science products beyond the mission requirements.
This manuscript presents a closure study that employs active, passive, and in-situ measurements to explore and emphasise the importance of knowledge of hydrometeor types. In the study, a hydrometeor classification is performed and subsequently used for retrievals, exploring how hydrometeor type assumptions impact agreement between simulations and observations. Additionally, the study presents how sub-millimetre polarimetric measurements can benefit retrievals of vertical hydrometeor type distributions and ice particle size.
The study is thoroughly performed, well-described, and results are carefully validated. Publication following minor revisions is recommended. Specific comments and questions are as follows:
Section 3.1:
It is interesting to see the impact of the assumptions of hydrometeor types in Fig. 6. However, it would be helpful to also see where in the cloud the different ice classes were assumed for Figs. 6a, 6b and 6c. For example, as shown in Fig. 3e.
The authors state that "Only using the detailed hydrometeor types can we reproduce cold enough TB depressions that match observations well." (lines 291-292). This appears to be true: using the 8-class hydrometeor types leads to clear improvements. However, the simulations still differ from the observations, e.g. by ~20 K at 17.4 UTC for 325.15 ± 11.5 GHz. Do the authors attribute this to still imperfect hydrometeor type classification? Or is this caused by another aspect of the simulations?
Section 3.2:
The ML model was trained on the Feb. 05 case and tested on the Jan. 15 case. This is reasonable and well-motivated. Given that the Jan. 15 case was earlier described as atypical (Sect. 2.1), could the authors discuss how this might affect the generalisability of the results shown? What might the model's performance look like for more typical cases?
In Fig. 11, the two panels showing the reference (11b and 11d) are not the same. Rather, it looks like panel d has a small horizontal offset compared to panel b. Were the targets not exactly the same for both models? Or is it a plotting artifact? If the latter, then I suggest plotting the same reference for both to aid comparison between models, or at least making the difference between the two clearer through the choice of variable on the x-axis. The same applies to Fig. 12.
The improvement upon including PD measurements is very clearly illustrated in Figs. 11 and 12, and quite impressive. Although, as the authors mention, the no-PD model does appear to capture the cloud top for the multi-layer thick cloud, it does seem to struggle overall. Likewise, it gives a false cloud for the single layer cloud case. It was therefore surprising that it achieves an mIoU score of 0.545, which the authors frame earlier as a typically good result. Do the authors attribute this to the model's ability to capture the cloud top/thickness, or did it perform better in other scenes than those shown?
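For context on how a moderate mIoU score can coexist with qualitatively poor retrievals in individual scenes, a minimal sketch of the metric follows (assuming the standard per-class intersection-over-union averaged over classes; the paper's actual implementation may differ, e.g. in how absent classes are handled):

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean IoU: per-class intersection/union, averaged over classes
    that appear in either the prediction or the target."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: a prediction that captures overall cloud extent
# but mislabels one pixel's hydrometeor class.
target = np.array([0, 0, 1, 1, 2, 2])
pred   = np.array([0, 0, 1, 2, 2, 2])
print(mean_iou(pred, target, n_classes=3))  # ≈ 0.722
```

Because the score averages over classes, good performance on dominant classes (e.g. clear sky and cloud top) can lift the mean even when rarer classes or whole scenes are missed, which may partly explain the 0.545.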
Additional comments:
Line 433: A reference is missing.
Line 222: Framework is misspelled.
The acronyms CRS and HIWRAP are not defined. Please check that acronyms are defined throughout.
Figure 1: Top panels: What do the dots signify? The stretches shown in lower panels are not clearly marked. Lower panels lack a colorbar.
Figure 5: Interpretation of the figure would be simplified by using log10 instead of the natural log. The color ranges are very wide and appear poorly adjusted: for example, some reach 10, corresponding to 22000 kg/m2, while for LWC the lower end is at -40, corresponding to 4e-18 kg/m2. The colorbar ticks are partially covered by the colorbars. It would also be helpful to clearly mark in the figure where the separate overpasses start and end, if this can be done clearly.
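The conversion behind the Figure 5 comment, as a quick arithmetic check (assuming the colorbar values are natural logarithms of kg/m2):

```python
import math

# A natural-log tick of 10 corresponds to e^10 ~ 2.2e4 kg/m2,
# and the LWC lower end of -40 to e^-40 ~ 4.2e-18 kg/m2.
print(math.exp(10))   # ≈ 22026
print(math.exp(-40))  # ≈ 4.25e-18

# The same ticks on a log10 axis (log10(x) = ln(x) / ln(10))
# would read as more familiar powers of ten:
print(10 / math.log(10))   # ≈ 4.34,  i.e. ~10^4.34
print(-40 / math.log(10))  # ≈ -17.4, i.e. ~10^-17.4
```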
Figure 6: Add units to colorbars. The top height could be set lower than 15 km.
Figure 7: Unit for y-axis is missing. Label text is quite small.
Figure 10: Labels on y-axis of panel c are too small (even on screen after zooming in). The meaning of the labels must be explained. Panel c lacks a colorbar.
Figure 13: The figure text should clarify whether simulations or observations are shown. The colorbar label overlaps with the y-axis label.