This work is distributed under the Creative Commons Attribution 4.0 License.
Using machine learning algorithm to retrieve cloud fraction based on FY-4A AGRI observations
Abstract. Cloud fraction, a vital component of meteorological satellite products, plays an essential role in environmental monitoring, disaster detection, climate analysis, and other research areas. A long short-term memory (LSTM) machine learning algorithm is used in this paper to retrieve the cloud fraction of AGRI (Advanced Geosynchronous Radiation Imager) onboard the FY-4A satellite, based on its full-disc level-1 radiance observations. A correction is subsequently applied to the retrieved cloud fraction in areas affected by sun glint, using a correction curve fitted with the sun-glint angle as weight. The algorithm consists of two steps: first, cloud detection is conducted for each AGRI field of view to classify it as clear sky, partly cloudy, or overcast; second, the cloud fraction is retrieved for fields of view identified as partly cloudy. The 2B-CLDCLASS-LIDAR cloud fraction product from the CloudSat and CALIPSO active remote sensing satellites is employed as the truth to assess the accuracy of the retrieval algorithm. A comparison with the operational AGRI level 2 cloud fraction product is also conducted. During daytime, the probability of detection (POD) for clear sky, partly cloudy, and overcast scenes in the official operational cloud detection product is 0.5359, 0.7041, and 0.7826, respectively. The corresponding POD values for the LSTM algorithm are 0.8294, 0.7223, and 0.8435. While the operational product often misclassifies clear sky scenes as cloudy, the LSTM algorithm improves the discrimination of clear sky scenes, albeit with a higher false alarm rate than the operational product. For partly cloudy scenes, the mean error (ME) and root-mean-square error (RMSE) of the operational product are 0.2374 and 0.3269, respectively. The LSTM algorithm exhibits a lower ME (0.1134) and RMSE (0.1897) than the operational product.
The large reflectance in the sun-glint region leads to significant cloud fraction retrieval errors with the LSTM algorithm; after the correction is applied, however, the retrieval accuracy in this region improves greatly. During nighttime, the LSTM model demonstrates improved POD for clear sky and partly cloudy scenes compared to the operational product, while maintaining a similar POD for overcast scenes and a lower false alarm rate. For partly cloudy scenes at night, the operational product exhibits a positive mean error, indicating an overestimation of cloud cover, whereas the LSTM model shows a negative mean error, indicating an underestimation. The LSTM model also exhibits a lower RMSE than the operational product.
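The verification metrics quoted in the abstract (POD, false alarm rate, ME, and RMSE) can be sketched as follows; the arrays below are invented toy values for illustration, not results from the paper.

```python
import numpy as np

# Toy labels for illustration: 0 = clear sky, 1 = partly cloudy, 2 = overcast
truth = np.array([0, 0, 1, 1, 2, 2, 1, 0])
pred  = np.array([0, 1, 1, 1, 2, 2, 0, 0])

def pod(truth, pred, cls):
    """Probability of detection for one class: hits / (hits + misses)."""
    return np.sum((truth == cls) & (pred == cls)) / np.sum(truth == cls)

def far(truth, pred, cls):
    """False alarm ratio for one class: false alarms / all predictions of that class."""
    return np.sum((truth != cls) & (pred == cls)) / np.sum(pred == cls)

# Toy retrieved vs. reference cloud fractions for partly cloudy scenes
cf_true = np.array([0.3, 0.5, 0.7])
cf_ret  = np.array([0.4, 0.45, 0.8])
me   = np.mean(cf_ret - cf_true)                  # mean error (bias)
rmse = np.sqrt(np.mean((cf_ret - cf_true) ** 2))  # root-mean-square error
```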
Status: closed
-
RC1: 'Comment on egusphere-2024-977', Anonymous Referee #1, 30 Jun 2024
General comments:
In this manuscript, the authors present a machine-learning-based cloud cover and cloud fraction retrieval framework for AGRI observations onboard the FY-4 satellites, using an LSTM network. As the reviewer who did the initial assessment before it came to a full review, I find the manuscript largely improved by the authors, with the majority of my concerns addressed. However, there still exist some minor issues in the content that could confuse readers. Therefore, I would suggest a minor revision before it gets published in this journal.
Specific comments:
- Line 34~35, ‘the accuracy,… greatly improved’ should be ‘the accuracy gets greatly improved’.
- Line 82, ‘is the based on’ should be ‘is based on’.
- Line 83, ‘Such as’ should be ‘For example,’ or ‘For instance’.
- Line 123, ‘It has total 14 channels’ should be ‘It has 14 channels in total’.
- Line 129, when referring to spatial resolution of satellite imagers, it should be mentioned as ‘4km at nadir’ as the pixel size varies by geolocation.
- Line 154, ‘for download’ should be ‘to download’.
- Line 163, ‘distance difference’ should be ’spatial distance’.
- Line 241, ‘hidden layer size’ can be confusing for readers: is that the number of hidden layers, or the number of neurons in each hidden layer? Personally I would believe the number 3 refers to the number of hidden layers. But the sentence on Line 238, which reads ‘having a hidden layer that is too large’, seems to indicate the latter meaning. Please explain in a clear way.
- Line 251~252, are both models A and B using the same settings of batch size, optimizer and loss function? As far as I know, classification models (model A) and regression models (model B) usually use different loss functions.
- Line 278, ‘convercast’ should be ‘overcast’.
- Line 309, ‘inverted’ should be ‘retrieved’.
- Line 310, ‘in this text’ should be ‘in this paper’.
- Line 311, there should be an ‘and’ before ‘operational products’.
- Line 315, again ‘inverted’, recommend the authors revise all ‘invert’ when referring to retrievals.
- Line 349~352, the description of components (lines, dots, etc.) on Figure 2(b) should also appear in the caption of Figure 2 so that readers don’t need to look for the sentences when looking at the figure itself.
- Table 4, what truth value are the statistics calculated against? According to Line 190~191 in this manuscript, 2B-CLDCLASS-LIDAR data is only available before July 2019.
Citation: https://doi.org/10.5194/egusphere-2024-977-RC1 -
AC1: 'Reply on RC1', Jinyi Xia, 30 Jun 2024
Thank you for pointing out the issues; here are my responses to all the comments:
1-7: I will make corrections in the next round of manuscript submission.
8: 'hidden layer size' refers to the number of hidden layers. I will clarify this in the next manuscript submission.
9: The batch size for the classification model is 500 (line 251), and for the regression model is 60 (line 266). Both models use the Adam optimizer (lines 256 and 267), the classification model uses cross-entropy as the loss function (line 252), and the regression model uses mean squared error (line 268).
10-15: I will make corrections in the next round of manuscript submission.
16: Tables 4 and 5 present errors in the sun glint region during June to July 2019, with the ground truth being 2B-CLDCLASS-LIDAR data. Line 190 contains an error. Data before August 2019 is available.
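The loss functions distinguished in reply 9 (cross-entropy for the classification model, mean squared error for the regression model) can be sketched in plain NumPy; the logits and cloud fractions below are invented, and the real models are trained with the Adam optimizer rather than evaluated in isolation like this.

```python
import numpy as np

def cross_entropy(logits, label):
    """Loss of the classification model (model A) for one sample."""
    p = np.exp(logits - logits.max())   # softmax with overflow guard
    p /= p.sum()
    return -np.log(p[label])

def mse(pred, target):
    """Loss of the cloud fraction regression model (model B)."""
    return np.mean((np.asarray(pred) - np.asarray(target)) ** 2)

# Invented logits for clear sky / partly cloudy / overcast; true class = clear sky
loss_a = cross_entropy(np.array([2.0, 0.5, -1.0]), label=0)
# Invented retrieved vs. reference cloud fractions
loss_b = mse([0.4, 0.6], [0.5, 0.5])
```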
Citation: https://doi.org/10.5194/egusphere-2024-977-AC1
-
RC2: 'Comment on egusphere-2024-977', Anonymous Referee #3, 30 Jun 2024
Review comments for “Using machine learning algorithm to retrieve cloud fraction based on FY-4A AGRI observations” by Xia, J. and Guan, L.
General comments:
This manuscript presents a novel application of a Long Short-Term Memory (LSTM) machine learning algorithm to retrieve cloud fraction using data from the FY-4A AGRI satellite. The study addresses an important aspect of meteorological research, providing a valuable contribution to the field by improving cloud detection and fraction retrieval accuracy. The manuscript is well-structured and comprehensive, detailing the methodology, data sources, and validation processes effectively. However, there are areas where the manuscript could be further strengthened. The introduction could benefit from a deeper explanation of why LSTM models are particularly well-suited for this application compared to other machine learning techniques. Additionally, while the results are promising, a more thorough discussion of potential biases and limitations, especially regarding the higher false alarm rates observed with the LSTM model, would provide a balanced view. Including confidence intervals or significance tests for the reported metrics would also add to the rigor of the statistical analysis.
The manuscript has the potential to be a valuable resource for researchers and practitioners in meteorology and environmental monitoring, but significant improvements should be made before publication.
Specific Comments
- The manuscript's main contribution is the method, but the abstract and the content focus only on the results. There is insufficient explanation of the method (machine learning and correction), its characteristics, and why it leads to improvements.
- The manuscript lacks a discussion about the impact of different machine learning structures on the retrieval of cloud fraction. Please provide more results regarding this.
- The data resolution of CloudSat and CALIPSO is not consistent with AGRI. It is crucial to discuss this dataset uncertainty and its impact on the retrieval.
- The captions of the figures and tables are too simplified and do not provide essential information.
- Line 63: What is the spatial resolution of the instrument?
- Lines 65-81: Summarize these old and classic references, emphasizing only the important ones.
- Lines 98-113: The logic is not clear. Why did the authors conduct this work, and what is the advantage of the LSTM compared to other frameworks?
- Table 1: A reference should be provided.
- Lines 149-155: Since cloud fraction is important for building the training dataset, the algorithm for the joint product should be introduced. Additionally, how the uncertainty in the dataset affects the retrieval should also be discussed.
- Lines 253-258: When discussing LSTM, the use of time series information is implied. Is this the case in your study?
Minor corrections:
- Lines 11, 43, 62, 275, etc.: add a comma before the “and” or “or” in a list
- Line 15, “Correction” should be “Corrections”
- Line 49, “determined” is too strong, it should be “influenced”
- Line 66, better to replace “encompassing” with “including”
- Line 81, it should be made clear why “season” and “climate” influence the thresholds
- Line 203, “and demonstrates” not correct, or just remove the period
- Line 206, remove the extra comma
- Line 276, remove the quotation mark, and add “the” before the POD and FAR
- Line 283, “retrieving cloud fraction”-> “retrieving cloud fractions”
- Line 301, “Operational” -> “operational”
- Lines 308-309, Line 317, etc.: why did you use “cloud amount” and “inverted” instead of “cloud fractions” and “retrieved”?
- Lines 467-471, use the abbr. “FAR” instead of “false alarm rate”
- Lines 473-478, it is not appropriate to compare the “algorithm” to the “operation product”, so replace the “than” in Line 475 and 478 with “compared to”
Citation: https://doi.org/10.5194/egusphere-2024-977-RC2 -
AC2: 'Reply on RC2', Jinyi Xia, 30 Jun 2024
Thank you for your comments on my manuscript; here is my response.
- I will provide explanations on the methods (machine learning and corrections) in the next manuscript update.
- I conducted experiments on the impact of different machine learning algorithms on cloud fraction retrieval, and I will include this content in the manuscript.
- The data resolution of CloudSat and CALIPSO differs from AGRI. FY-4A AGRI and 2B-CLDCLASS-LIDAR data are matched with a spatial distance under 1.5 km and a time difference under 15 minutes. For effective collocation of 2B-CLDCLASS-LIDAR cloud fraction data within AGRI pixels, a minimum of two 2B-CLDCLASS-LIDAR pixels is required within each AGRI field of view; the average cloud fraction of these pixels is used for that AGRI pixel.
- I will modify the titles of the figures and tables in the next manuscript upload.
- The spatial resolution of FY-4A AGRI is 4 km; I will add this information.
- Thank you for your suggestions, I will make corrections according to your feedback.
- I only pointed out the shortcomings of the threshold method, with LSTM performing better than the threshold method. I did not describe the advantages of LSTM compared to other frameworks. I will amend this based on the feedback.
- I will provide the reference for Table 1.
- I will add an introduction to the joint product algorithm.
- This paper only uses the LSTM algorithm for classification and regression, excluding time series information.
For minor corrections, I will carefully make revisions and thoroughly recheck the entire text.
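The collocation rule described in the reply above can be sketched as a small function; the function name and inputs are hypothetical, and only the 1.5 km threshold and the two-pixel minimum come from the reply.

```python
import numpy as np

def collocate_cloud_fraction(cldclass_cf, distances_km,
                             max_dist_km=1.5, min_pixels=2):
    """Assign a 2B-CLDCLASS-LIDAR cloud fraction to one AGRI field of view.

    Returns the mean cloud fraction of the candidate pixels within
    max_dist_km, or None when fewer than min_pixels pixels match.
    """
    cf = np.asarray(cldclass_cf, dtype=float)
    d  = np.asarray(distances_km, dtype=float)
    matched = cf[d <= max_dist_km]
    if matched.size < min_pixels:
        return None
    return float(matched.mean())

# Invented example: two of three candidate pixels fall within 1.5 km,
# so the AGRI pixel receives their mean cloud fraction
agri_cf = collocate_cloud_fraction([0.2, 0.4, 0.9], [0.8, 1.2, 3.0])
```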
Citation: https://doi.org/10.5194/egusphere-2024-977-AC2
-
RC3: 'Comment on egusphere-2024-977', Anonymous Referee #2, 01 Jul 2024
General comments:
A simple LSTM performs detection and cloud fraction regression, using FY-4A AGRI data as input, with labels based on 2B-CLDCLASS-LIDAR. The task is important for meteorological and climate applications, and the authors do an excellent job of providing motivation for their work. However, the proposed machine learning method is unusual for a few reasons, and the unusual aspects are not justified by much explanation or comparison with other, more standard machine learning practices.
If the authors complete some major changes, this work has the potential to be valuable.
Line-by-line comments:
- 191-196: Why do you only use data from May and June 2019? FY-4A was launched earlier, so shouldn’t there be more data available? It would be good to take a random sample from a larger window of time with more seasonal variability. If there is some other reason only to use this time window, it needs to be stated here.
- 239-241: “Typically, the optimal size of the hidden layer is determined by trying different sizes and evaluating their performance on a validation set.” Correct. Did you do this? If so, the results should be in the paper.
- 241: Your hidden layer sizes are extremely small. Also, you need to state how many hidden layers you have. My subsequent analysis will assume you only have 1 hidden layer. Even so, this needs to be explicitly stated.
- 244-246: This is simply a restatement of the definition of batch size, and not an explanation. I’d remove both sentences. Instead, you can simply say something like “A larger batch size typically reduces the training time per epoch for complex reasons” and then cite some paper about batch size, e.g. [1].
- 254: Are the inputs to the network normalized? If not, they should be.
- 267: “a batch size of 60 is chosen due to the limited sample number in dataset B.” If you have limited data, that doesn’t mean you need to decrease the batch size. Given the extremely small size of your LSTM, I don’t think you will have a problem with overfitting, so you can safely use as large of a batch size as you have the VRAM for.
- 268: This is fine for the cloud fraction task, but you shouldn’t / can’t use MSE for the cloud detection task, as it is a classification and not a regression.
- Figure 2:
- (a) circular lines should be thicker, text should be larger
- (b) missing a legend for the colors
- (c) put the degree sign ° on the tick labels in the color bar
- (d) missing a legend for the colors, and you should use different marker types (e.g. triangles and circles) to help colorblind readers
Major changes requested:
- You use a temporal / recurrent network architecture (LSTM) but have no description of how the temporal aspect of your data is used by the network. You need to describe this, in detail. What are the timesteps? Is there a warmup time for your LSTM (a minimum time before it attains a stable state and a reasonable accuracy)? On the other hand, if there is no time series information in use here, you should not be using an LSTM!
- There needs to be some comparison with other models. A (single-layer?) network with only 3 hidden units is a very unusual choice. This is a tiny neural network. You could write down the full definition of this network on a piece of paper.
- Why even use a neural network at that point, instead of something like a random forest? The latter would be more interpretable, easier to train / justify, and potentially more powerful than your very small model.
- If you’re going to keep such a small network, I’d like to see a comparison with linear regression.
- How does it compare with a larger LSTM?
- Is the temporal component helpful (i.e. how does the LSTM compare with a standard MLP)?
- Why not make use of spatial context? You could consider using a convolutional neural network or a ConvLSTM.
- I think you should use a larger dataset, or at least one that isn’t restricted to one month. There is seasonal variability to clouds, and you want your algorithm to be effective year-round.
- You are comparing your network with an operational cloud product. This is not a very fair comparison, as the operational cloud product is designed with different goals and principles (and physics) in mind than 2B-CLDCLASS-LIDAR was, whereas your method is directly trained on the 2B-CLDCLASS-LIDAR product.
- You should spend more time breaking down these differences
- Having a reasonable standard baseline (like a regular neural network or a random forest) would further help contextualize your results.
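The reviewer's central objection, that a recurrent network expects a sequence axis the per-pixel retrieval does not have, can be illustrated with a small shape sketch (all dimensions invented):

```python
import numpy as np

n_samples, n_channels = 4, 14     # invented batch size; 14 AGRI channels

mlp_input  = np.zeros((n_samples, n_channels))   # MLP: (batch, features)
lstm_input = mlp_input[:, np.newaxis, :]         # LSTM: (batch, timesteps, features)

# With timesteps == 1 the recurrence never unrolls, so the LSTM's gates add
# parameters and complexity without exploiting any temporal structure.
```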
References
[1] Smith, Samuel L., et al. "Don't decay the learning rate, increase the batch size." arXiv preprint arXiv:1711.00489 (2017).
Citation: https://doi.org/10.5194/egusphere-2024-977-RC3 -
AC3: 'Reply on RC3', Jinyi Xia, 16 Jul 2024
Thank you for your comments, here is my reply:
- Due to the lack of time series information in this paper, using LSTM is not reasonable, so I have switched to using the Random Forest algorithm.
- In a single full-disc observation from FY-4A AGRI, the Northern and Southern Hemispheres contain data from different seasons, climates, and surface types. Therefore, the training dataset required for training the model does not need to cover a long period of time.
- An additional description of the 2B-CLDCLASS-LIDAR product algorithm was provided, along with an analysis of its differences from the operational products. The 2B-CLDCLASS-LIDAR product, derived from the active remote sensing instruments CPR and CALIOP, is currently the most accurate cloud fraction product. When evaluating the accuracy of cloud retrieval algorithms against operational products, using 2B-CLDCLASS-LIDAR as the reference value is the only choice.
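The switch to random forests mentioned in the first point could look like the following scikit-learn sketch; the synthetic data and hyperparameters are placeholders, not the revised paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 14))            # placeholder features (14 AGRI channels)
y_class = rng.integers(0, 3, size=200)    # 0 clear / 1 partly cloudy / 2 overcast
y_frac  = rng.uniform(0, 1, size=200)     # cloud fraction in [0, 1]

# Step 1: cloud detection; step 2: cloud fraction for partly cloudy scenes
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_class)
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y_frac)

scene    = clf.predict(X[:1])[0]
fraction = reg.predict(X[:1])[0]
```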
Citation: https://doi.org/10.5194/egusphere-2024-977-AC3