This work is distributed under the Creative Commons Attribution 4.0 License.
Supercooled liquid water cloud classification using lidar backscatter peak properties
Abstract. The use of depolarization lidar to measure atmospheric volume depolarization ratio (VDR) is a common technique to classify cloud phase (liquid or ice). Previous work using a machine learning framework, applied to peak properties derived from co-polarized attenuated backscatter data, has been shown to effectively detect supercooled liquid water containing clouds (SLCC). However, the training data from Davis Station, Antarctica, include no warm liquid water clouds (WLCC), potentially limiting the model's accuracy in regions where WLCC are present. In this work, we apply the Davis model to a 9-month Micro Pulse Lidar dataset collected in Christchurch, New Zealand, a location where WLCC occur. We then evaluate the results against a reference VDR cloud phase mask. We found that the Davis model performed relatively poorly at detecting SLCC, with an accuracy of 0.62, often misclassifying WLCC as SLCC. We then trained a new model, using data from Christchurch, to perform SLCC detection on the same set of co-polarized attenuated backscatter peak properties. Our new model performed well, with accuracy scores as high as 0.89, highlighting the effectiveness of the machine learning technique when training data relevant to the location are used.
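As a hedged illustration of the general approach described in the abstract (a sketch under assumed thresholds and an assumed feature set, not the authors' exact pipeline), peak properties can be extracted from a single co-polarized attenuated backscatter profile with `scipy.signal.find_peaks` and then used as features for a supervised phase classifier:

```python
# Illustrative sketch only -- the thresholds, feature set, and downstream
# classifier are assumptions, not the pipeline used in the paper.
import numpy as np
from scipy.signal import find_peaks

def peak_features(profile, heights_m):
    """Extract per-peak properties from one co-polarized attenuated
    backscatter profile (candidate liquid-layer features)."""
    idx, props = find_peaks(
        profile,
        prominence=1e-5,  # assumed minimum peak prominence
        width=2,          # assumed minimum width, in range gates
    )
    return [
        {
            "height_m": heights_m[i],
            "amplitude": profile[i],
            "prominence": props["prominences"][k],
            "width_gates": props["widths"][k],
        }
        for k, i in enumerate(idx)
    ]

# Synthetic demo profile: weak background plus one sharp "liquid-like" peak.
heights = np.arange(0, 15000, 30.0)                           # 30 m range gates
profile = 1e-6 + 5e-5 * np.exp(-((heights - 1500) / 90) ** 2)
print(peak_features(profile, heights))
# Each peak's properties (plus, e.g., model temperature at the peak height)
# would then feed a classifier trained against a reference VDR phase mask.
```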
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
- RC1: 'Comment on egusphere-2023-1085', Anonymous Referee #1, 07 Dec 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-1085/egusphere-2023-1085-RC1-supplement.pdf
- AC1: 'Reply on RC1', Luke Whitehead, 08 Jun 2024
- RC2: 'Comment on egusphere-2023-1085', Anonymous Referee #2, 24 Feb 2024
This review addresses the manuscript of Whitehead et al. (2023), entitled “Supercooled liquid water cloud classification using lidar backscatter peak properties”. The study deals with the identification of liquid water layers based on machine learning techniques applied to observations from a polarization-capable micro pulse lidar (MPL). While not yet very familiar with the emerging world of machine learning, I was able to gain some further impression of how AI might assist in future data analyses.
The central instrument of the study is, besides a toolset of existing Python-based machine learning APIs, a polarimetric MPL system operated at Christchurch, New Zealand. Three retrievals were cross-evaluated against each other: the default one provided by the MPL software, an ML retrieval trained for the Antarctic site of Davis, and a newly trained one for the site of Christchurch.
As far as I understood, the default MPL retrieval was set as the reference dataset. It was found that the ML retrieval trained for Davis scored worse than the retrieval trained directly on observations at Christchurch.
While well structured and well written overall, the study in the end appears somewhat inconclusive. In my impression, the conclusions drawn are not suited to provide guidance for future studies. I have learned that machine learning classifications cannot be transferred between different sites. I wonder if this is a good basis for any future (comparative) studies. How should valid scientific conclusions be drawn when the underlying datasets are individually tuned to single sites? What do the authors think about this issue? Isn't it more appropriate to use a physics-based retrieval which just requires a well-calibrated instrument? All further retrieval steps would then be determined solely by the atmospheric state, without the inclusion of tuned ML-based decision trees.
That said, given the good structure and rather complete content, I consider the study suited for publication in AMT. I nevertheless have a series of remarks and questions, which I would like the authors to reply to and to consider in the revised version of the manuscript. A second round of revision appears necessary in order to evaluate whether the identified issues and remarks have been adequately addressed.
Major comments:
- I was missing an overview of the standard procedures of cloud detection. For example, the widely used synergistic cloud retrievals such as ARSCL or Cloudnet use a combination of attenuated backscatter threshold and gradient (https://doi.org/10.2172/1808567, https://joss.theoj.org/papers/10.21105/joss.02123; both just require the lidar for the liquid detection) and appear to be quite successful with this. In addition, the two retrievals use different attenuated backscatter thresholds, which demonstrates that there is no single definitive solution. This information might be relevant to discuss, e.g., at line 101 of the manuscript. (A minimal sketch of such a threshold-plus-gradient test is given after this list.)
- Equation 1: This expression is only true for an ideal lidar system with known system constants, no cross-talk between the channels, and a 100% perfect polarization state of the emitted light. A general treatment of the depolarization calibration procedure is described by https://amt.copernicus.org/articles/9/4181/2016/. (A simplified sketch of the ideal vs. gain-calibrated expression is given after this list.)
- Lines 148-166: Multiple layers. I don't quite get what the offset is between two subsequent layers. I would calculate the actual offset between Q2prime and Q2second as 1.1e-4 − 4.4e-5 = 6.6e-5. Can the authors please clarify? I also don't understand the statement about the extinction in line 154: extinction is not in units of m^-1 sr^-1 (it has units of m^-1; m^-1 sr^-1 is a backscatter unit).
- Minimum detection height: What is the minimum height for cloud layer detection? In Fig. 3e, there appears to be a gap between the ground and the first cloud bases. At least for the Arctic, low-level stratus clouds with a base below 100 m were recently highlighted as a challenging but important piece of the Arctic cloud puzzle (Griesche et al., 2024; https://doi.org/10.5194/acp-24-597-2024). See also the second case study in the manuscript, which indicates that there are issues near the surface.
- Fig. 3f: Do the authors have an explanation for the gap in SLCC and WLCC at 0°C? Looking at Fig. 2f, this gap is not so pronounced in the overall LCC distribution. This discussion could be added at lines 182-183.
- Case studies 1 and 2, Figs. 4 and 5: The case studies leave me really puzzled. The performance of the G22 retrieval is visually just really low, especially for the low-level clouds in the evening of case study 2 (Fig. 5). On the other hand, the reference VDR retrieval misclassifies high aerosol loads as liquid cloud in case study 1 (Fig. 4). Intuitively, I would presume that both issues could be mitigated just by using a simple combination of threshold value and gradient. Why the machine learning?
- Lines 476-477: After applying the SHAP filtering, three parameters remain relevant: temperature, peak prominence (backscatter threshold), and peak width (gradient). It is amusing to see that the cleaned version of the G22 retrieval condenses to the same parameters as are used in the standard threshold/gradient methods. (A sketch of this kind of SHAP-based feature ranking is given after this list.)
- Figure 6: It would be nice to have a version of frequency vs. temperature. This could help to evaluate the impact of specular reflection on the retrievals, as specular reflection (false liquid) should be most prominent in the temperature range between -10 and -20°C.
- A final statement could be added to the conclusion: which retrieval will the authors use for their future studies? Can the authors make a decision and motivate it?
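To make the first major comment concrete, the following is a minimal sketch of a threshold-plus-gradient liquid test in the spirit of ARSCL/Cloudnet-style retrievals; both numerical thresholds are placeholder assumptions, not the tuned values used by either retrieval.

```python
# Minimal sketch of a threshold-plus-gradient liquid test. The two
# numerical thresholds are placeholder assumptions, not the tuned
# values used by ARSCL or Cloudnet.
import numpy as np

def liquid_mask(att_bsc, dz_m=30.0,
                bsc_threshold=2e-5,     # assumed att. BSC threshold (m^-1 sr^-1)
                grad_threshold=1e-6):   # assumed gradient threshold (m^-2 sr^-1)
    """Flag range gates where attenuated backscatter exceeds a threshold
    AND its vertical gradient is steep (sharp liquid cloud-base edge)."""
    grad = np.gradient(att_bsc, dz_m)
    return (att_bsc > bsc_threshold) & (np.abs(grad) > grad_threshold)
```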
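For reference on the Equation 1 comment: assuming the manuscript's Eq. (1) has the usual ideal form, the calibrated version requires at minimum a channel gain ratio V*; the sketch below is a deliberately simplified assumption (cross-talk terms omitted), not a reproduction of the full formalism in the linked Freudenthaler (2016) paper.

```latex
% Ideal volume depolarization ratio (assumed form of Eq. 1):
\delta_v = \frac{P_\perp}{P_\parallel}
% For a real system, the channel gain ratio V^* must at minimum be
% calibrated out (e.g., via +-45 degree calibration measurements);
% cross-talk corrections are omitted in this sketch:
\delta_v = \frac{1}{V^*} \, \frac{P_\perp}{P_\parallel}
```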
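And for readers unfamiliar with the SHAP filtering mentioned in the Lines 476-477 comment, a minimal sketch of ranking features by mean absolute SHAP value might look like the following; the synthetic data, feature names, and classifier are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch of SHAP-based feature ranking. The synthetic data,
# feature names, and classifier are assumptions, not the paper's setup.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["temperature", "peak_prominence", "peak_width", "amplitude"]
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

clf = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global importance: mean absolute SHAP value per feature; low-ranked
# features are candidates for removal ("SHAP filtering").
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.4f}")
```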
Minor comments:
- Line 15: There is actually not only collision freezing. I suggest stating that an INP needs to be involved in heterogeneous nucleation of ice at temperatures between 0°C and -40°C. A citation of Hoose and Möhler (2012, https://doi.org/10.5194/acp-12-9817-2012) should do the job.
- Line 94: really just 21? I guess it's 61, isn't it? https://www2.mmm.ucar.edu/rt/amps/information/configuration/configuration.html
- Line 129: Existence of ice at temperatures above 0°C. The statement is actually not quite true. Melting of ice depends on the dewpoint temperature: ice can well exist at higher air temperatures as long as the dewpoint (wet-bulb or ice-bulb) temperature is below 0°C (e.g., https://doi.org/10.1175/JAS-D-20-0353.1). Thus the WLCC liquid water statement should be treated as somewhat relative.
- Frequently, the term 'cloud' appears where it should actually be in plural form (e.g., lines 16, 29, 48). Are there variants of English in which 'cloud' is both singular and plural, or are these just typos?
- Lines 183 and 250: use \citep.
Citation: https://doi.org/10.5194/egusphere-2023-1085-RC2
- AC2: 'Reply on RC2', Luke Whitehead, 08 Jun 2024
Viewed
HTML | PDF | XML | Total | BibTeX | EndNote
---|---|---|---|---|---
412 | 125 | 40 | 577 | 33 | 36
Luke Edgar Whitehead
Adrian James McDonald
Adrien Guyot