This work is distributed under the Creative Commons Attribution 4.0 License.
Innovative Cloud Quantification: Deep Learning Classification and Finite Element Clustering for Ground-Based All Sky Imaging
Abstract. Accurate cloud quantification is essential in climate change research. In this work, we construct an automated computer vision framework that synergistically combines deep neural networks and finite element clustering to achieve robust whole-sky-image-based cloud classification, adaptive segmentation, and recognition under intricate illumination dynamics. A bespoke YOLOv8 architecture attains over 95 % categorical precision across four archetypal cloud varieties curated from extensive annual observations (2020) at a Tibetan highland station. Tailor-made segmentation strategies adapted to distinct cloud configurations, allied with illumination-invariant image enhancement algorithms, effectively eliminate solar interference and substantially boost quantitative performance even under adverse illumination. Compared with the traditional NRBR threshold method, the cloud quantification accuracy computed within this framework improves by nearly 20 %. Collectively, these methodological innovations provide an advanced solution that markedly raises the cloud quantification precision needed for climate change research, while offering a cloud analytics paradigm transferable to other meteorological stations.
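For context, the traditional NRBR baseline over which the abstract reports its roughly 20 % improvement is a per-pixel red-blue threshold. Below is a minimal sketch of that baseline and of the cloud-fraction computation it feeds, assuming RGB all-sky images; the threshold value and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nrbr_cloud_mask(rgb, threshold=0.05):
    """Threshold the normalized red-blue ratio, NRBR = (R - B) / (R + B).

    Clear sky scatters blue strongly (NRBR well below 0), while cloud pixels
    have similar red and blue intensities (NRBR near or above 0). The
    threshold is illustrative and would normally be tuned per camera and site.
    """
    rgb = rgb.astype(np.float64)
    r, b = rgb[..., 0], rgb[..., 2]
    nrbr = (r - b) / np.maximum(r + b, 1e-6)  # guard against division by zero
    return nrbr > threshold                    # True = cloud pixel

def cloud_fraction(cloud_mask, sky_mask):
    """Cloud cover = cloud pixels / valid sky pixels inside the fisheye disc."""
    return cloud_mask[sky_mask].mean()
```

The paper's approach replaces this single global threshold with per-sector k-means clustering on enhanced images, which is what gives it its robustness near the sun and under adverse illumination.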
-
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
-
Journal article(s) based on this preprint
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2024-678', Anonymous Referee #1, 25 Mar 2024
Comment to “Innovative Cloud Quantification: Deep Learning Classification and Finite Element Clustering for Ground-Based All Sky Imaging”
This study provides an innovative cloud quantification method that yields relatively accurate cloud information, which is important for climate studies. It is worthy of publication after the necessary modifications.
General comment
In proposing this topic, the authors should be aware that the definition of clouds is challenging and that cloud observations from different instruments vary considerably, making cloud information uncertain. This raises a serious issue: how can the authors provide true reference information for the training? Note that this question is general for all cloud identification studies.
Introduction part:
Regarding the importance of clouds, particularly for the radiation balance via their radiative forcing, a recent review study by Zhao et al. (2023, doi: 10.1016/j.atmosres.2023.106899) is worth mentioning here.
For the sentence “In essence, clouds serve as an important "sunshade" to maintain the balance of the greenhouse effect and prevent overheating of the Earth”: while the sentence is definitely correct, it is fair to also mention the net cooling effect of clouds globally.
For the sentence “For instance, high-level cirrus clouds mainly contribute to reflection and scattering, while low-level stratus and cumulus clouds more so cause the greenhouse effect”: This is wrong, since high cirrus clouds exert a warming (greenhouse) effect and low clouds exert a cooling effect.
For the sentence “Moreover, there are considerable regional disparities in cloud amount, and pronounced differences exist in regional climate characteristics”: There are many studies on the regional variations of clouds that are worth citing here, such as the recent study by Chi et al. (2024, doi: 10.1016/j.atmosres.2024.107316).
For the image processing techniques used for cloud detection, previous studies should be introduced and cited to clarify the novelty of this study.
There are multiple previous cloud classification methods, including machine learning algorithms, texture feature extraction, and so on; the most recent studies should be mentioned or cited.
Laser radar (lidar) does not necessarily have a large equipment size.
2.1 Study area part
“with relatively good air quality and low atmospheric pollution levels”: I think using “with relatively good air quality” is enough.
Table 1: “Measure cloud distance” is better as “Measurable cloud distance”
3.2.3: Have similar indicators been used by other studies? If so, a few references would be helpful.
3.4: As indicated, a proper K value is important for K-means method. How do the authors choose their K values?
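One generic way to address this K-selection question, not necessarily the authors' procedure, is to sweep candidate K values and keep the one with the best silhouette score; a minimal scikit-learn sketch under that assumption:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_k(pixel_features, candidate_ks=(2, 3, 4, 5, 6), random_state=0):
    """Pick the K with the highest silhouette score.

    pixel_features: (n_pixels, n_features) array, e.g. RGB or NRBR values
    drawn from one image sector. The candidate range is an assumption.
    """
    best_k, best_score = candidate_ks[0], -1.0
    for k in candidate_ks:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=random_state).fit_predict(pixel_features)
        # Subsample for speed; the silhouette score is quadratic in sample size.
        score = silhouette_score(pixel_features, labels,
                                 sample_size=min(len(pixel_features), 10000),
                                 random_state=random_state)
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```

The elbow criterion on the k-means inertia curve is a common alternative; either way, the chosen criterion should be stated in the manuscript.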
Citation: https://doi.org/10.5194/egusphere-2024-678-RC1 -
AC1: 'Reply on RC1', Yinan Wang, 03 Apr 2024
Dear reviewers,
Thank you very much for taking the time to read our manuscript and provide your valuable feedback. We take your comments very seriously and have carefully considered each and every question you have asked. In order to make it easier for you to view our response, we have organized the detailed answers to all questions into a separate Word document and submitted it as an attachment with this response.
You can find the responses in the attached Word document. We hope that the revised manuscript will adequately answer your questions and continue to improve the quality and scientific value of the paper. If you have any other questions, please feel free to contact us.
Yours sincerely,
Yinan Wang
April 3, 2024
Institute of Atmospheric Physics
-
RC2: 'Comment on egusphere-2024-678', Anonymous Referee #2, 27 Mar 2024
The authors have proposed an end-to-end cloud recognition framework that synergistically integrates deep learning classification, adaptive image enhancement, and finite sector k-means clustering. By leveraging state-of-the-art deep learning models such as YOLOv8, the authors have achieved outstanding cloud type classification performance, with an average accuracy exceeding 98% on both public datasets and their self-built dataset from the Tibetan Plateau region. The in-depth analysis of the diurnal and seasonal variations of different cloud types provides valuable observational data support for understanding the spatiotemporal evolution of cloud systems.
However, there are some areas that require further improvement:
(1) The review of existing traditional cloud detection methods is not comprehensive enough, and a more systematic evaluation of their strengths and limitations is needed.
(2) Although data from the Tibetan Plateau site was used, the spatial representativeness is still limited due to the use of a single site. Future work should consider incorporating data from multiple regions to enhance the model's broad applicability.
(3) While the association between cloud amount and solar radiation is mentioned, no in-depth discussion is provided. It is recommended to further analyze the influence of different cloud types on solar radiation characteristics.
(4) Although the methodology is clearly presented, some details regarding equations, parameters, and symbols are not comprehensively explained, requiring further elaboration and clarification to ensure the reproducibility and transparency of the research work. Specifically:
- a) An explanation of the variables TP, FP, TN, and FN used in the evaluation metrics such as precision, recall, and F1-score, along with their calculation methods, should be provided to facilitate better understanding of the evaluation system (the standard formulas are sketched after this list).
- b) The details of the image enhancement algorithm for dehazing need to be thoroughly described, especially the processes for obtaining the key parameters, atmospheric light A and transmission rate t, to ensure the reproducibility of the image enhancement step.
- c) The finite sector K-means clustering segmentation strategy employs different numbers of sectors for different cloud types, but the rationale and basis for this setting are not explained. The authors should clarify the reasons behind the chosen sector numbers for each cloud type.
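For point (a), the standard definitions in terms of the per-class counts of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) are:

```latex
\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}, \quad
\mathrm{Precision} = \frac{TP}{TP + FP}, \quad
\mathrm{Recall} = \frac{TP}{TP + FN}, \quad
F_1 = \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```

The manuscript should state whether these are computed per class and then averaged (macro) or pooled over all samples (micro).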
Citation: https://doi.org/10.5194/egusphere-2024-678-RC2 -
AC2: 'Reply on RC2', Yinan Wang, 03 Apr 2024
Dear reviewers,
Thank you very much for taking the time to read our manuscript and provide your valuable feedback. We take your comments very seriously and have carefully considered each and every question you have asked. In order to make it easier for you to view our response, we have organized the detailed answers to all questions into a separate Word document and submitted it as an attachment with this response.
You can find the responses in the attached Word document. We hope that the revised manuscript will adequately answer your questions and continue to improve the quality and scientific value of the paper. If you have any other questions, please feel free to contact us.
Yours sincerely,
Yinan Wang
April 3, 2024
Institute of Atmospheric Physics
-
RC3: 'Comment on egusphere-2024-678', Anonymous Referee #3, 08 Apr 2024
Reviewer's comments on "Innovative Cloud Quantification: Deep Learning Classification and Finite Element Clustering for Ground-Based All Sky Imaging" by Luo et al.
General comments
This paper develops a cloud classification and recognition method for ground-based all-sky imagers. The authors use deep learning to achieve high-precision cloud classification and process different types of clouds separately to obtain cloud detection results, which are shown to be better than those of traditional algorithms. High-precision cloud detection and classification results provide a fundamental dataset for understanding cloud evolution and development in the Tibetan region. However, the detailed descriptions of the methods in the manuscript are not sufficient, and the evaluation of the results is not comprehensive. Some discussion and clarification are needed.
Major comments:
- In the title, abstract and introduction of the manuscript, "cloud quantification" has appeared many times, but without a clear definition.
- In the second paragraph of the introduction, the structure reads somewhat chaotically. The classification methods and observation instruments are both mentioned in the overview of cloud classification methods. In the later description of the cloud quantification method, the advantages and disadvantages of the existing cloud quantification methods are not specifically introduced. It is suggested to reconsider the structure of this part.
- The traditional NRBR recognition method mentioned in the abstract might be briefly summarized in the introduction. The last paragraph of the introduction mentions the problems of cloud identification in current algorithms, but it is not clear which specific method the authors refer to.
- In Section 2.3, how are the four types of data divided in image dataset?
- In Section 3.2.1, the description of the neural network design is not clear. What are the advantages of the YOLOv8 architecture for the research problems of this manuscript, and why was this framework chosen? In addition, the convolutional part of YOLOv8 places certain requirements on the size of the input image. Why was an input image size of 680×680 selected in this paper?
- For what reason is the number of training epochs set to 400? In Figure 4, when the epoch count exceeds 200, F1 is basically unchanged and the loss is no longer reduced.
- In the description of the evaluation metrics in Section 3.2.3, what are the validation set and test set mentioned? What do descriptions such as true positive and false negative represent? Please give clearer explanations.
- In Section 3.3, how is the A value estimated, and how is the adaptive process reflected in the defogging algorithm? Do you mean that each type of cloud selects a different A value to achieve adaptivity? (A generic sketch of the standard dark-channel estimation of A and t is given after these major comments.)
- In Figure 7, it can be seen that the contrast between cloud and clear sky is obviously enhanced after image enhancement. I am curious whether the NRBR method has been applied to the images before and after enhancement, and whether the results differ. Please provide the cloud detection results of the NRBR method for the enhanced image and compare them with your new method.
- The algorithm uses images from 9:00 to 16:00 during the day as training data, and the paper mentions that illumination strongly interferes with the recognition results. What are the identification results before 9:00 and after 16:00? Are there any examples? In the cloud cover time series of Fig. 8, you give the comparison between the proposed algorithm and the traditional algorithm. How do you calculate the improvement in accuracy of the new method compared to the traditional algorithm?
- The classification accuracy of the algorithm is very high, but I am interested in the misclassified images. Can we know the specific reasons for these misclassifications?
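On how A and t are obtained (asked above and in RC2, point b), the widely used dark channel prior of He et al. estimates the atmospheric light A from the brightest dark-channel pixels and the transmission t from a locally normalized dark channel. The sketch below is that generic procedure, not necessarily the authors' exact adaptive variant; the patch size, omega, and t0 are conventional values.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB followed by a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_A(img, dark, top_frac=0.001):
    """Atmospheric light A: mean colour of the brightest 0.1 % dark-channel pixels."""
    n = max(1, int(dark.size * top_frac))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    return img[idx].mean(axis=0)

def estimate_t(img, A, omega=0.95, patch=15):
    """Transmission t(x) = 1 - omega * dark_channel(I(x) / A)."""
    return 1.0 - omega * dark_channel(img / A, patch)

def dehaze(img, t0=0.1):
    """Recover scene radiance J = (I - A) / max(t, t0) + A."""
    img = img.astype(np.float64)
    A = estimate_A(img, dark_channel(img))
    t = np.clip(estimate_t(img, A), t0, 1.0)
    return (img - A) / t[..., None] + A
```

A per-cloud-type adaptation could, for example, set A or omega differently for thin cirrus than for thick stratus, which may be what the adaptive step in Section 3.3 refers to; the manuscript should state this explicitly.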
Minor comments:
- In the third paragraph of the introduction, the sentence “Some studies train k-means models to swiftly cluster and recognize cloud and clear sky regions in whole sky images, improving cloud quantification speed and efficiency.” lacks literature citation.
- When the “Yangbajing Comprehensive Atmospheric Observatory” appears for the first time, it is recommended to give specific latitude and longitude coordinates.
- In Section 2.3, pay attention to the number of samples; it should be 15 samples per day instead of 16.
- In Part 2.2, what is CMOS? If it is an abbreviation, please give a full name.
- Figure 8 in “Likewise, the cloud cover recognition effect at the base of heavier clouds and the overexposed area surrounding the sun are greatly improved when the image is enhanced, as seen in Figure 8g,h.” should be changed to Figure 7.
- Some of the grammar and wording in the article needs to be checked carefully.
Citation: https://doi.org/10.5194/egusphere-2024-678-RC3 -
AC3: 'Reply on RC3', Yinan Wang, 15 Apr 2024
Dear reviewers,
Thank you very much for taking the time to read our manuscript and provide your valuable feedback. We take your comments very seriously and have carefully considered each and every question you have asked. In order to make it easier for you to view our response, we have organized the detailed answers to all questions into a separate Word document and submitted it as an attachment with this response.
You can find the responses in the attached Word document. We hope that the revised manuscript will adequately answer your questions and continue to improve the quality and scientific value of the paper. If you have any other questions, please feel free to contact us.
Yours sincerely,
Yinan Wang
April 15, 2024
Institute of Atmospheric Physics
-
RC4: 'Comment on egusphere-2024-678', Anonymous Referee #4, 12 Apr 2024
The manuscript presents a novel framework for cloud quantification using a deep neural network and finite element clustering on ground-based all-sky images. The authors have introduced a bespoke YOLOv8 architecture for cloud classification and adaptive segmentation strategies for cloud quantification. The methodology aims to enhance cloud recognition precision, which is crucial for climate change research.
Major Comments:
1. Methodology Clarity and Detail:
The description of the deep neural network architecture and the process of finite element segmentation and clustering is detailed and provides a clear understanding of the approach. However, the authors might consider including additional visual aids or flow diagrams that could further elucidate the step-by-step process, especially for readers who may not be as familiar with the technical aspects of neural networks and image segmentation.
2. Dataset Representativeness:
The authors have used an extensive dataset from the Yangbajing station in Tibet. It would be beneficial to discuss the representativeness of this dataset in the context of other geographic regions or climatic conditions. If the model's applicability is limited to regions similar to the dataset's origin, this limitation should be explicitly stated.
3. Validation and Testing:
The paper presents impressive classification accuracy rates. However, it would be advantageous for the authors to include additional validation, possibly through a comparison with other state-of-the-art methods or by applying the framework to an independent dataset to verify its generalizability.
4. Impact of Illumination Dynamics:
While the paper addresses illumination dynamics and their impact on cloud quantification, it would be interesting to see a more in-depth analysis of how different lighting conditions, such as those during sunrise and sunset, affect the accuracy of cloud detection.
5. Scalability and Transferability:
The discussion on the scalability and versatility of the approach is promising. To bolster these claims, a section on potential modifications or adaptations required to apply this framework to different meteorological stations would be beneficial.
Minor Comments:
1. Terminology Consistency:
Ensure consistency in terminology, especially when referring to the various neural network components and cloud types, to avoid confusion.
2. Statistical Analysis:
The use of precision, recall, and F1-score is appropriate. Including additional statistical analyses, such as a confusion matrix, would provide a more comprehensive overview of the model's performance across all classes (a minimal sketch follows below).
3. Future Work:
The authors have briefly mentioned future work in improving the model's adaptability to overexposed regions. Elaborating on potential avenues for future research, such as incorporating additional atmospheric parameters or exploring the effects of climate change on cloud dynamics, would be insightful.
The cited literature is currently poor; I suggest the authors cite relevant studies on cirrus clouds and their importance.
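Regarding the confusion matrix suggested in minor comment 2, a minimal sketch of how one could be produced for the four cloud classes, assuming per-image true and predicted labels are available; the class names and labels below are illustrative placeholders, not necessarily the manuscript's categories:

```python
from sklearn.metrics import confusion_matrix, classification_report

classes = ["cirrus", "cumulus", "stratus", "clear"]   # illustrative labels
y_true = ["cirrus", "cumulus", "stratus", "clear", "cumulus"]
y_pred = ["cirrus", "cumulus", "cirrus",  "clear", "cumulus"]

# Rows are the true class, columns the predicted class.
print(confusion_matrix(y_true, y_pred, labels=classes))

# Per-class precision, recall, and F1 in one table.
print(classification_report(y_true, y_pred, labels=classes, zero_division=0))
```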
Citation: https://doi.org/10.5194/egusphere-2024-678-RC4 -
AC4: 'Reply on RC4', Yinan Wang, 24 Apr 2024
Dear reviewers,
Thank you very much for taking the time to read our manuscript and provide your valuable feedback. We take your comments very seriously and have carefully considered each and every question you have asked. In order to make it easier for you to view our response, we have organized the detailed answers to all questions into a separate Word document and submitted it as an attachment with this response.
You can find the responses in the attached Word document. We hope that the revised manuscript will adequately answer your questions and continue to improve the quality and scientific value of the paper. If you have any other questions, please feel free to contact us.
Yours sincerely,
Yinan Wang
April 24, 2024
Institute of Atmospheric Physics
-
Peer review completion
Journal article(s) based on this preprint
Viewed
Since the preprint corresponding to this journal article was posted outside of Copernicus Publications, the preprint-related metrics are limited to HTML views.
- HTML: 219
- PDF: 0
- XML: 0
- Total: 219
- BibTeX: 0
- EndNote: 0
Cited
1 citation as recorded by Crossref.
Jingxuan Luo
Yubing Pan
Debin Su
Jinhua Zhong
Lingxiao Wu
Wei Zhao
Xiaoru Hu
Zhengchao Qi
Daren Lu
Yinan Wang