This preprint is distributed under the Creative Commons Attribution 4.0 License.
Infrared Radiometric Image Classification and Segmentation of Cloud Structure Using Deep-learning Framework for Ground-based Infrared Thermal Camera Observations
Abstract. Infrared thermal cameras offer a reliable means of assessing atmospheric conditions by measuring the downward radiance from the sky, which makes them well suited to cloud-monitoring tasks. Precisely identifying and detecting clouds in images remains challenging because of the indistinct boundaries inherent to cloud formations. Various segmentation methodologies have been proposed previously. Most of them rely on color as the distinguishing criterion for cloud identification in the visible spectral domain and thus cannot detect cloud structure in gray-scale images with satisfying accuracy. In this work, we propose a new, complete deep-learning framework that performs image classification and segmentation with convolutional neural networks. We demonstrate the effectiveness of this technique through a series of tests and validations on self-captured infrared sky images. Our findings reveal that the models can effectively differentiate between image types and accurately capture detailed cloud-structure information at the pixel level, even when trained with a single binary ground-truth mask per input sample. The classifier model achieves an accuracy of 99 % in image-type distinction, while the segmentation model attains a mean pixel accuracy of 94 % on our dataset. Our framework is thus a strong candidate for ground-based infrared thermal cloud-monitoring operations over extended durations. We expect to take advantage of this framework for astronomical applications by providing cloud-cover selection criteria for ground-based photometric observations within the StarDICE experiment.
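For context, the mean pixel accuracy quoted above measures the fraction of pixels on which the predicted binary mask agrees with the ground-truth mask, averaged over test images. A minimal NumPy sketch of this metric (an illustration under our own naming, not the paper's evaluation code):

```python
import numpy as np

def mean_pixel_accuracy(pred_masks, true_masks):
    """Fraction of pixels on which predicted and ground-truth binary
    masks agree, averaged over a batch of shape (N, H, W)."""
    per_image = (pred_masks == true_masks).mean(axis=(1, 2))
    return per_image.mean()

# Toy example: first mask disagrees on 1 of 16 pixels, second is perfect
pred = np.zeros((2, 4, 4), dtype=int)
true = np.zeros((2, 4, 4), dtype=int)
pred[0, 0, 0] = 1
print(mean_pixel_accuracy(pred, true))  # (15/16 + 16/16) / 2 = 0.96875
```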
Status: closed
RC1: 'Comment on egusphere-2024-101', Anonymous Referee #1, 04 Jul 2024
The research addresses the challenge of improving photometric observation quality in the StarDICE experiment with the help of a long-wave infrared camera that captures sky images. A novel deep-learning framework is proposed, which combines a linear classifier and a U-Net model. The classifier differentiates between cloudy and clear-sky images, whereas the U-Net identifies cloud structures. The framework was tested on a self-acquired dataset and several public datasets, demonstrating effective classification and segmentation and generating catalogs of optical images suitable for photometric analysis, despite some limitations.
The paper presents a novel and effective method. It is mostly well organized and well written, and the authors provide sufficient results to demonstrate the validity of their work.
However, there is room for improvement.
- Aside from the authors’ LWIR dataset, some benchmark RGB color image datasets, transformed into gray-scale images, were used for evaluation. We suggest that the authors use at least one or two open-source infrared sky image datasets for evaluation, if available.
- While the classification and segmentation models’ performances are presented separately, it is also important to evaluate the overall framework. The classification model can occasionally misclassify an image, which will affect the segmentation model’s performance as well.
- Tables 1 and 2 both show the proposed model’s segmentation performance on the LWIRISEG dataset. In Table 1, the performance on LWIRISEG is compared against other open-source RGB datasets that were modified for this experiment. However, it is unclear why the performance on the LWIRISEG dataset differs between the two tables. We suggest clarifying this point in the text and in the table captions.
Citation: https://doi.org/10.5194/egusphere-2024-101-RC1
AC1: 'Reply on RC1', Kélian Sommer, 28 Jul 2024
Dear Referees,
Thank you for your thorough reviews of our paper. We appreciate your valuable feedback and have carefully considered all of your comments and suggestions, which have improved the clarity of the paper.
Below, we address your questions and comments.
Best regards,
Kélian SOMMER & Wassim KABALAN
=================================================================================
Aside from the authors’ LWIR dataset, some benchmark RGB color image datasets, transformed into gray-scale images, were used for evaluation. We suggest that the authors use at least one or two open-source infrared sky image datasets for evaluation, if available.
> K.S : Unfortunately, after doing some research, we could not find any other publicly available dataset of IR sky images. In fact, a further goal of the paper (besides presenting a new method) was to propose the first dataset of this kind. Therefore, the only comparison available to us is to transform an RGB dataset into a gray-scale one and to perform the computations with it.
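> For illustration, the conversion amounts to collapsing the three color channels into a single luminance channel; a minimal sketch using OpenCV (file paths are placeholders, not our actual dataset layout):

```python
import cv2

# Convert one RGB benchmark image to a single-channel gray-scale image so
# it can be processed like an LWIR frame. Paths below are placeholders.
bgr = cv2.imread("rgb_dataset/sky_0001.png")      # OpenCV reads images as BGR
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)      # 8-bit luminance image
cv2.imwrite("gray_dataset/sky_0001.png", gray)
```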
While the classification and segmentation models’ performances are presented separately, it is also important to evaluate the overall framework. The classification model can occasionally misclassify an image, which will affect the segmentation model’s performance as well.
> K.S : Thank you for this comment. We combined the two stages of the framework into a single algorithm that first classifies and then segments input images. A dedicated notebook is available in the source code for this purpose: https://github.com/ASKabalan/infrared-cloud-detection/blob/main/notebooks/Cloud_classification_segmentation.ipynb
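> Schematically, the combined algorithm is a classify-then-segment cascade; the sketch below illustrates that logic only (the names classifier, unet, and the 0.5 thresholds are placeholders, not the notebook's exact API):

```python
import numpy as np

def classify_then_segment(image, classifier, unet, cloudy_threshold=0.5):
    """Run the full framework on a single LWIR image.

    The classifier first estimates the probability that the image is
    cloudy; only cloudy images are passed to the U-Net, whose per-pixel
    output is thresholded into a binary cloud mask.
    """
    if classifier(image) < cloudy_threshold:
        return np.zeros(image.shape, dtype=np.uint8)  # clear sky: empty mask
    prob_map = unet(image)                            # per-pixel cloud probability
    return (prob_map >= 0.5).astype(np.uint8)         # binary segmentation mask
```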
Tables 1 and 2 both show the proposed model’s segmentation performance on the LWIRISEG dataset. In Table 1, the performance on LWIRISEG is compared against other open-source RGB datasets that were modified for this experiment. However, it is unclear why the performance on the LWIRISEG dataset differs between the two tables. We suggest clarifying this point in the text and in the table captions.
> K.S : Thank you for pointing this out. Indeed, there was a glitch: one row corresponded to the final epoch of the model's training, whereas the other was taken at a different epoch. We made the correction.
Citation: https://doi.org/10.5194/egusphere-2024-101-AC1
RC2: 'Comment on egusphere-2024-101', Anonymous Referee #2, 12 Jul 2024
The discussed topic – screening clouds with the aim of better astronomical measurements – is interesting, and the proposed methods seem applicable. It must be remembered that the essential goal – ground-based detection and classification of clouds at visible or IR frequencies – is a big challenge in itself, regardless of the technology used. This stems from the vague definition of a cloud: we can imagine a row of meteorologists looking at or measuring clouds and reporting widely deviating classifications. In particular, the borders of fuzzy, smooth clouds such as cirrostratus are difficult to detect, even to define. Viewing angle, solar effects, and noise cause considerable problems in recognition.
The methodology presented seems valid, and the literature review is impressive. Yet another linguistic check of the text is recommended. The authors mention that AI was used for spell checking; a quick check revealed AI-like oddities in the bibliography as well, such as "Olli-Pekka, H. and OpenCV, t.: OpenCV Python packages" (OpenCV is not a person, and Olli-Pekka is a first name, not a last name). I was also a bit confused by the use of the term "ground truth", which typically refers to the most reliable data available for comparison, usually difficult to collect or produce. Here, it seemed more like the result of an alternative, fast method, which could hence be named something other than ground truth.
I am somewhat uncertain about my judgement, as I don't actively follow this topic (cloud classification, especially for astronomy) nowadays.
Hence, for a further revision, I recommend sending this article to another reviewer.
Citation: https://doi.org/10.5194/egusphere-2024-101-RC2
AC2: 'Reply on RC2', Kélian Sommer, 28 Jul 2024
Dear Referees,
Thank you for your thorough reviews of our paper. We appreciate your valuable feedback and have carefully considered all of your comments and suggestions, which have improved the clarity of the paper.
Below, we address your questions and comments.
Best regards,
Kélian SOMMER & Wassim KABALAN
========================================================================================
The methodology presented seems valid, and the literature review is impressive. Yet another linguistic check of the text is recommended. The authors mention that AI was used for spell checking; a quick check revealed AI-like oddities in the bibliography as well, such as "Olli-Pekka, H. and OpenCV, t.: OpenCV Python packages" (OpenCV is not a person, and Olli-Pekka is a first name, not a last name).
> K.S : Thank you for pointing this out; we corrected it.
I was also a bit confused by the use of the term "ground truth", which typically refers to the most reliable data available for comparison, usually difficult to collect or produce. Here, it seemed more like the result of an alternative, fast method, which could hence be named something other than ground truth.
> K.S : We added a clarification in the text: "By \textit{ground truth}, we refer to the empirically and manually generated masks." As you say, it is difficult to produce a truly objective reference image, as the definition of a cloud in an image is somewhat empirical. However, we manually created the masks and visually inspected them.
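> As an aside, one hypothetical way such a mask can be drafted before manual inspection is simple intensity thresholding; the sketch below is illustrative only and does not reproduce our exact procedure (the threshold value and array names are assumptions):

```python
import numpy as np

def draft_cloud_mask(lwir_image, threshold):
    """Draft a binary cloud mask by thresholding pixel intensity.

    Clouds radiate more strongly than clear sky in the LWIR band, so
    bright pixels are flagged as cloud. The draft is then inspected and
    corrected by hand before being accepted as a ground-truth mask.
    """
    return (lwir_image > threshold).astype(np.uint8)

image = np.random.rand(120, 160)            # stand-in for a real LWIR frame
mask = draft_cloud_mask(image, threshold=0.7)
print(mask.sum(), "pixels flagged as cloud")
```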
Citation: https://doi.org/10.5194/egusphere-2024-101-AC2