This work is distributed under the Creative Commons Attribution 4.0 License.
Infrared Radiometric Image Classification and Segmentation of Cloud Structure Using Deep-learning Framework for Ground-based Infrared Thermal Camera Observations
Abstract. Infrared thermal cameras offer a reliable means of assessing atmospheric conditions by measuring the downward radiance from the sky, making them well suited to cloud monitoring. Precise identification and detection of clouds in images is nevertheless challenging because of the indistinct boundaries inherent to cloud formations. Various segmentation methods have been proposed previously. Most rely on color as the distinguishing criterion for cloud identification in the visible spectral domain and thus cannot detect cloud structure in gray-scale images with satisfying accuracy. In this work, we propose a new, complete deep-learning framework that performs image classification and segmentation with Convolutional Neural Networks. We demonstrate the effectiveness of this technique through a series of tests and validations on self-captured infrared sky images. Our findings reveal that the models can effectively differentiate between image types and accurately capture detailed cloud structure information at the pixel level, even when trained with a single binary ground-truth mask per input sample. The classifier model achieves an accuracy of 99 % in image type distinction, while the segmentation model attains a mean pixel accuracy of 94 % on our dataset. Our framework is thus viable for infrared thermal ground-based cloud monitoring operations over extended durations. We expect to take advantage of this framework in astronomical applications by providing cloud cover selection criteria for ground-based photometric observations within the StarDICE experiment.
Status: open (until 19 Jul 2024)
RC1: 'Comment on egusphere-2024-101', Anonymous Referee #1, 04 Jul 2024
The research addresses the challenge of improving photometric observation quality in the StarDICE experiment with the help of a long-wave infrared camera that captures sky images. A novel deep-learning framework is proposed, which combines a linear classifier and a U-Net model. The classifier differentiates between cloudy and clear sky images, whereas the U-Net identifies cloud structures. The framework was tested on a self-acquired dataset and several public datasets, and demonstrated effective classification and segmentation, generating catalogs of optical images suitable for photometric analysis despite some limitations.
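The two-stage design described above (classify first, segment only cloudy images) can be sketched as follows. This is a minimal illustration of the control flow only: the paper's CNN classifier and U-Net are replaced here by trivial stand-ins (a mean-intensity threshold and a per-pixel threshold), and all function names are hypothetical.

```python
def classify(image, threshold=0.5):
    """Stand-in for the CNN classifier: mean intensity above the
    threshold is treated as 'cloudy' (the real model is learned)."""
    flat = [p for row in image for p in row]
    return "cloudy" if sum(flat) / len(flat) > threshold else "clear"

def segment(image, threshold=0.5):
    """Stand-in for the U-Net: binary cloud mask from a per-pixel
    threshold (the real model predicts the mask with a CNN)."""
    return [[1 if p > threshold else 0 for p in row] for row in image]

def process(image):
    """Full pipeline: classify, then segment only if cloudy."""
    label = classify(image)
    mask = segment(image) if label == "cloudy" else None
    return label, mask

if __name__ == "__main__":
    clear_sky = [[0.1, 0.2], [0.1, 0.3]]
    cloudy_sky = [[0.9, 0.8], [0.2, 0.7]]
    print(process(clear_sky)[0])   # clear
    print(process(cloudy_sky)[0])  # cloudy
```

The key design choice this illustrates is that segmentation cost is only paid for images the classifier flags as cloudy, which is why a classification error propagates into the segmentation output.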
The paper presents a novel and effective method. The paper was mostly well-organized and well-written. The authors provided sufficient results to prove the validity of their work.
However, there is room for improvement.
- Aside from the authors’ LWIR dataset, several benchmark RGB color image datasets, transformed into gray-scale images, were used for evaluation. The authors are encouraged to also evaluate on at least one or two open-source infrared sky image datasets, if available.
- While the classification and segmentation models’ performances are presented separately, it is also important to evaluate the overall framework. The classification model can occasionally misclassify an image, which affects the segmentation model’s performance as well.
- In both Tables 1 and 2, the proposed model’s segmentation performance on the LWIRISEG dataset is shown. In Table 1, the performance on LWIRISEG is compared alongside other open-source RGB datasets that were modified for this experiment. However, it is unclear why the performance on the LWIRISEG dataset differs between these two tables. Clarifying this point in the text and the table captions is suggested.
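The referee's point about end-to-end evaluation can be made concrete with a combined metric: if the classifier wrongly routes a cloudy image as clear, the segmentation step never runs, so the prediction effectively defaults to an all-zero (cloud-free) mask and that error counts against the framework's pixel accuracy. A sketch under these assumptions (all names hypothetical, not from the paper):

```python
def pixel_accuracy(pred_mask, true_mask):
    """Fraction of pixels where predicted and ground-truth masks agree."""
    total = correct = 0
    for prow, trow in zip(pred_mask, true_mask):
        for p, t in zip(prow, trow):
            total += 1
            correct += (p == t)
    return correct / total

def end_to_end_accuracy(samples, classify, segment):
    """Mean pixel accuracy of the whole framework.

    `samples` is a list of (image, true_mask) pairs. When the
    classifier calls an image 'clear', segmentation is skipped and the
    prediction defaults to an all-zero mask, so a classification miss
    on a cloudy image propagates into the segmentation score.
    """
    scores = []
    for image, true_mask in samples:
        if classify(image) == "cloudy":
            pred = segment(image)
        else:
            pred = [[0] * len(row) for row in true_mask]
        scores.append(pixel_accuracy(pred, true_mask))
    return sum(scores) / len(scores)
```

Reporting this combined number alongside the per-model metrics would address the concern that the two stages are only evaluated in isolation.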
Citation: https://doi.org/10.5194/egusphere-2024-101-RC1 -
RC2: 'Comment on egusphere-2024-101', Anonymous Referee #2, 12 Jul 2024
The discussed topic – screening clouds, aiming at better astronomical measurements – was interesting, and the proposed methods seemed applicable. It must be remembered that the essential goal – ground-based detection/classification of clouds at visible or IR frequencies – is a big challenge as such, regardless of the technology used. This stems already from the vague definition of cloud(s); we can imagine a row of meteorologists looking at or measuring clouds – and reporting largely deviating classifications. In particular, the borders of fuzzy, smooth clouds like cirrostratus are difficult to detect, even to define. Viewing angle, solar effects and noise cause remarkable problems in recognition.
The methodology presented seems valid. The literature review was impressive. Yet another linguistic check of the text is recommended. The authors mention that AI was used in spell checking. A quick check revealed AI-like oddities in the bibliography as well, such as "Olli-Pekka, H. and OpenCV, t.: OpenCV Python packages" (OpenCV is not a person, and Olli-Pekka is a first name, not a last name). I was also a bit confused by the use of the term "ground truth", as it typically refers to the most reliable data available for comparison, which is usually difficult to collect or produce. Here, it seemed more like the result of an alternative, fast method, and could hence be named something other than ground truth.
I am somewhat uncertain about my judgement, as I don't actively follow this topic (cloud classification esp. for astronomy) nowadays.
Hence, for a further revision, I recommend sending this article to another reviewer.
Citation: https://doi.org/10.5194/egusphere-2024-101-RC2