This work is distributed under the Creative Commons Attribution 4.0 License.
Revealing Halos Concealed by Cirrus Clouds
Abstract. Many types of halos appear in the sky. Each type corresponds to a particular shape and orientation of the ice crystals in clouds and thus reflects the state of the atmosphere, so observing halos from the ground contributes greatly to understanding atmospheric conditions. However, halos are easily obscured by the contrast of the clouds themselves, which makes them difficult to observe. This difficulty can be overcome by enhancing halos in images, and various techniques have been developed for this purpose. This study describes the construction of a sky-color model for halos and a new, effective algorithm for revealing halos in images.
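For readers new to this class of technique, the sketch below illustrates the general idea discussed throughout this page: fit a smooth model of the sky's color gradient and subtract it so that deviations such as halo light stand out. It is a minimal, hypothetical example only; the function name, the quadratic background model, and the blue-minus-red signal are assumptions chosen for illustration and do not reproduce the sky-color model or the algorithm described in the manuscript.

```python
import numpy as np

def enhance_halo_sketch(rgb):
    """Illustrative only: fit a low-order polynomial model of the
    blue-minus-red signal across the image and subtract it, so that
    local deviations such as halo light stand out.
    `rgb` is a float array of shape (H, W, 3) scaled to [0, 1]."""
    b_minus_r = rgb[..., 2] - rgb[..., 0]      # refraction halos are chromatic; cloud texture is nearly grey
    h, w = b_minus_r.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Design matrix for a quadratic background in image coordinates
    A = np.stack([np.ones(h * w), xx.ravel(), yy.ravel(),
                  xx.ravel() ** 2, yy.ravel() ** 2, (xx * yy).ravel()], axis=1)
    coeff, *_ = np.linalg.lstsq(A, b_minus_r.ravel(), rcond=None)
    background = (A @ coeff).reshape(h, w)
    residual = b_minus_r - background          # what the smooth sky model cannot explain
    # Stretch the residual to [0, 1] for display
    lo, hi = np.percentile(residual, [1, 99])
    return np.clip((residual - lo) / (hi - lo + 1e-9), 0.0, 1.0)
```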
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2023-456', Anonymous Referee #3, 22 Nov 2023
The manuscript “Revealing Halos Concealed by Cirrus Clouds” by Y. Ayatsuka presents a new algorithm to automatically detect halos, which are often hard to see, in images. Generally, the manuscript is well written and fits well within the scope of AMT. There are, however, some comments that should be addressed before acceptance.
Main comment:
The author does a good job describing the algorithm to detect halos. However, it remains unclear why this is even important, and the introduction should explain more fully why this work matters. It would also be valuable to discuss whether the algorithm is then used to determine the kind of halo, as well as the ice crystal habits typically associated with each halo type, etc. As it is, the text leaves the reader wondering why they should care about it, especially since the halo is dependent on sun and camera positioning.
Minor comments:
Line 19: AOI is not explained
Line 19: “differently weighted” – how exactly?
Equation (3): What exactly does the equal symbol with dots stand for? Many readers will not be familiar with that symbol since it’s not commonly used (I have not seen it before).
Line 145: Why is “b” chosen depending on the pixel number of an image? Why not depending on the camera angle? If the camera angle is wider, there would be more variations in color across the image. Please elaborate.
Typos:
There are a few instances where spaces are missing: line 65 (“Althoughthe”), line 72 (“andα”), line 105 (“processingb-g”), line 123 (“algorithmcalled”), line 157 (“autoBRare”)
Line 8: types
Line 15: “observation. and” needs to be fixed
Line 77: implemented
Line 167: developed
Citation: https://doi.org/10.5194/egusphere-2023-456-RC1
- AC2: 'Reply on RC1', Yuji Ayatsuka, 22 Feb 2024
-
RC2: 'Comment on egusphere-2023-456', Anonymous Referee #4, 31 Jan 2024
I think the work the author is doing is of value and suitable to AMT. I am aware that image processing techniques exist for the purpose of enhancing halos, but I don’t know that much documentation exists. The author takes us through some of those equations and proposes some new ones. I like where this is headed, but I don’t think that there is enough here right now to warrant publication. There could be, and I have some suggestions, so I am recommending a major revision. One of my recommendations is quite major indeed and perhaps that requires sufficient time that a resubmission decision may be necessary. I am supportive of the author’s efforts, I just don’t think this has matured enough yet.
Major Comments.
- The study offers some promising modifications to existing methods. However, it does not fully motivate the work by explaining how conventional methods are used and how this new method should be implemented, and it does not provide a quantitative demonstration that the proposed method outperforms the others. Even qualitatively in Figure 11, the relative performance of the methods seems to vary between or across scenes. Specific to this study, I think the Boyd and Forster studies are essential for the work to be publishable in AMT. Start with a description of how those studies did or did not process images and then pose the hypothesis for what could be gained with Sky-Color Regression. Whether paired with the Boyd/Forster detection algorithms or with future machine learning methods, I expect the biggest impact of the present study will come through unsupervised detection from large data sets of sky imagery, so I recommend linking this study more closely with those data sets. The major revision I would like to see is a quantitative assessment of the Boyd (or other) method using the four image processing techniques described here. To save time, the Boyd training set (presumably available online or from the authors) could be used as a benchmark. Can you show quantitatively that there is an advantage to the new approach?
- This study is very lightly referenced.
- More justification of the relevance of detecting halos is needed. Many studies, perhaps some by Alexei Korolev (ECCC), Greg McFarquhar (Univ. Oklahoma), Ben Murray (Leeds), Sergey Matrosov (Univ. Colorado), Knut Stamnes (SIT), Ping Yang (Texas A&M), and others could be used to motivate the importance of information about ice crystal habit and orientation for radiative transfer, cloud-aerosol interactions, phase partitioning, cloud radar, precipitation, secondary ice production, etc. While halos generally highlight an atypical atmospheric state, they are nonetheless relevant indicators of elusive data on ice habit and orientation.
- More review of how halo images are analyzed would benefit the paper. I don’t know the extent of this literature, but the work of, e.g., Kenneth Sassen, Walter Tape, and Jarmo Moilanen, is a place to start.
- More review of how the older B-R technique is used would also benefit the paper. For example, Section 2 reviews the use of similar methods for other purposes but not the main subject of this paper.
- Discussion is needed about the limitations of the proposed method. Perhaps this will be made clear from (a). Related, more details in Sections 2 and 3 on what these differencing techniques are actually doing would be helpful (see the illustrative sketch after this list). What characteristics of halo lighting (e.g., reflective vs refractive) are they enhancing, and how are they minimizing clouds in the image? What assumptions are involved?
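As a concrete aside to the question above, the classic B-R differencing idea can be illustrated as follows. This is a minimal, hypothetical example, not the manuscript's autoBR or Sky-Color Regression method, and the Pillow-based image loading is an assumption. Grey cloud texture has nearly equal blue and red values and largely cancels in the subtraction, while chromatic (refractive) halo light survives; purely reflective, white features such as a parhelic circle cancel along with the clouds.

```python
import numpy as np
from PIL import Image

def b_minus_r(path):
    """Classic blue-minus-red enhancement (illustrative only)."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    diff = rgb[..., 2] - rgb[..., 0]   # grey clouds: B ~ R, so cloud texture largely cancels
    # Stretch to [0, 1] for display; chromatic halo light remains visible,
    # while white, reflective features cancel along with the clouds.
    lo, hi = diff.min(), diff.max()
    return (diff - lo) / (hi - lo + 1e-9)
```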
Minor comments.
Line 8. Somewhat nitpicky, but halos refer to full circle displays (implying randomly oriented particles) and oriented displays are arcs; i.e., Fig. 1 includes both arcs and halos.
Line 42, 165. The purpose isn’t to detect halos, right? Rather, the purpose is to process images so as to enhance light associated with halos and increase the accuracy of detection algorithms. See Major comment (a).
Lines 18-20: I can’t tell if autoBR needs a citation or if it is being introduced in the present manuscript.
Line 26: It seems like this technique would work best for arcs that are refractive (i.e., disperse white light into spectral constituents like a 22 deg halo) but not for arcs that are reflective (i.e., are also white, like a parhelic circle). Is that true?
Line 76: Explored heuristically where? Earlier you stated that you developed it but no reference was provided. Is this being introduced here or can it be referenced?
Figure 5, Line 95: I’m having trouble understanding what is being represented here. The annotation is ambiguous and the arrows aren’t explained. Can you add information to the caption and add interpretation to the text?
There are no photo credits. Were the photos all taken by the author? If they aren’t, please add credits. If they are, congrats on the odd radius halos in the middle row of Fig 11. Outstanding capture!
Editorial.
Line 7: “tyupes” to “types”
Line 15: “observation. and a”
Lines 19, 20: “weighed” to “weighted”
Line 65: add space “althoughthe”
Line 72: “andalpha”
Line 105: “processingb-g”
Citation: https://doi.org/10.5194/egusphere-2023-456-RC2
- AC1: 'Reply on RC2', Yuji Ayatsuka, 22 Feb 2024
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
| --- | --- | --- | --- | --- | --- |
| 662 | 99 | 35 | 796 | 16 | 18 |
Cited
1 citation as recorded by Crossref.