This work is distributed under the Creative Commons Attribution 4.0 License.
3-D Cloud Masking Across a Broad Swath using Multi-angle Polarimetry and Deep Learning
Sean Foley, Kirk D. Knobelspiesse, Andrew M. Sayer, James Hays, and Judy Hoffman
Abstract. Understanding the 3-dimensional structure of clouds is of crucial importance to modeling our changing climate. Active sensors, such as radar and lidar, provide accurate vertical cloud profiles, but are mostly restricted to along-track sampling. Passive sensors can capture a wide swath, but struggle to see beneath cloud tops. In essence, both types of products are restricted to two dimensions: a cross-section in the active case, and an image in the passive case. However, multi-angle sensor configurations contain implicit information about 3D structure, due to parallax and atmospheric path differences. Extracting that implicit information can be challenging, requiring computationally expensive radiative transfer techniques that must make limiting assumptions. Machine learning, as an alternative, may be able to capture some of the complexity of a full 3D radiative transfer solution with significantly less computational expense. In this work, we make three contributions towards understanding 3D cloud structure from multi-angle polarimetry. First, we introduce a large-scale, open-source dataset that fuses existing cloud products into a format more amenable to machine learning. This dataset treats multi-angle polarimetry as an input, and radar-based vertical cloud profiles as an output. Second, we describe and evaluate strong baseline machine learning models that predict these profiles from the passive imagery. Notably, these models are trained only on center-swath labels, but can predict cloud profiles over the entire passive imagery swath. Third, we leverage the information-theoretic nature of machine learning to draw conclusions about the relative utility of various sensor configurations, including spectral channels, viewing angles, and polarimetry. These findings have implications for Earth-observing missions such as NASA's Plankton, Aerosol, Cloud-ocean Ecosystem (PACE) and Atmosphere Observing System (AOS), as well as for informing future applications of computer vision to atmospheric remote sensing.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
- RC1: 'Comment on egusphere-2023-2392', Anonymous Referee #2, 05 Feb 2024
General Comments:
This work trains a hierarchy of deep learning models to predict 3D cloud volume. It uses multi-angle, multi-spectral polarimetric imagery as the input and is trained against W-band cloud-radar measurements. This paper is a good fit for Atmospheric Measurement Techniques, as it demonstrates a new technique for processing multi-angle observations that will benefit several upcoming satellite missions. The methodology also appears sound. However, I think that the results can be explored in more depth to provide more insight into the performance of the technique and the information content of the measurements. I therefore recommend major revisions to the paper to accommodate the additional analysis suggested below.
Specific Comments:
The performance of the technique (Dice score) is only categorized by altitude, which provides limited insight into the factors controlling the technique and the information content of the measurements. I suggest additional analysis below to solidify the support for several key discussion points in the paper.
The results should be stratified by region/surface type and perhaps solar zenith angle. The authors mention a strong sensitivity to region/surface in principle but do not quantify it in their own results.
Furthermore, I think that the ability to detect the cloud base/volume in thick or multi-layered scenes should be explicitly explored, by quantifying skill as a function of cumulative optical depth from the TOA as measured by CALIOP, and/or by categorizing by nth layer as identified by the CLDCLASS-LIDAR (or a similar) product. The authors mention this in their discussion, but it is not quantified. Quantifying this will highlight whether or not the POLDER measurements are able to extract the signal of distinct cloud layers within the same column from their multi-angle signatures, and would be a valuable contribution.
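[Editorial illustration: one way to implement the requested stratification, sketched with synthetic stand-in arrays. All names and numbers here are hypothetical, not taken from the paper or its data.]

```python
import numpy as np

# Hypothetical per-column arrays: dice[i] is the Dice score of column i,
# tau[i] is the CALIOP cumulative optical depth from TOA for that column.
rng = np.random.default_rng(0)
tau = rng.uniform(0.0, 10.0, size=5000)
dice = np.clip(0.9 - 0.05 * tau + rng.normal(0, 0.05, size=5000), 0, 1)

# Bin columns by optical depth and report mean Dice per bin.
edges = np.array([0.0, 0.3, 1.0, 3.0, 10.0])
bins = np.digitize(tau, edges) - 1
for b in range(len(edges) - 1):
    sel = bins == b
    print(f"tau in [{edges[b]:.1f}, {edges[b+1]:.1f}): "
          f"mean Dice = {dice[sel].mean():.3f} (n={sel.sum()})")
```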
Secondarily, I wonder about the role of cloud horizontal size, and how it varies by altitude, in setting the performance of the technique. I imagine that performance degrades for clouds that are unresolved by the multi-angle measurements.
It would be valuable if the authors could quantify this as it might provide some guidance on how the performance of the technique may extrapolate to higher-resolution measurements (e.g. PACE/AOS) as more and more clouds will be ‘resolved’ by the multi-angle imagery.
The authors mention poor performance when CALIOP observations are included in the training data due to the abundance of optically thin clouds (Line 105). This seems like an important point, and so it would be useful to describe exactly the properties of these clouds (i.e. what is the definition of ‘optically thin’?). Does this include optically thin low clouds – to which passive sensors are actually more sensitive? Or does it primarily result from very thin (tau < 0.3) cirrus clouds?
I would also suggest that the opposite issue may exist in the training of the technique: POLDER is actually sensitive to many clouds that are not flagged by the radar despite having a substantial optical signal (Chan and Comiso, 2011), which may result in poor skill. My guess is that including CALIOP in the training data may actually improve skill at low altitudes with appropriate filtering of the training data.
The authors describe several different network architectures which take stacked multi-angle imagery registered to the surface. My understanding is that a CNN requires wide enough filters and a deep enough stack of layers to encode non-local information. When combining multi-angle features, there should be a simple relationship between the disparity in pixel space and the required depth/width of the network. These arguments would suggest a relatively small amount of non-local information available from POLDER, supported by the relative skill of the models, but not for higher-resolution measurements. I think it would be valuable for the authors to comment on the selection of model architectures for these sorts of problems and describe what they might do for PACE/AOS.
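[Editorial illustration: a rough back-of-the-envelope sketch of the disparity-vs-depth argument. The resolutions and viewing geometry below are illustrative assumptions, not figures from the paper.]

```python
import math

def receptive_field(layers):
    """Receptive field (in input pixels) of a stack of conv layers,
    where each layer is a (kernel_size, stride) pair."""
    rf, jump = 1, 1  # jump: spacing of adjacent output pixels in input space
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# A plain 4-layer CNN with 3x3 kernels and stride 1 sees only a 9-pixel window:
print(receptive_field([(3, 1)] * 4))  # -> 9

# Parallax of a cloud at 10 km altitude viewed ~50 degrees off-nadir:
shift_km = 10 * math.tan(math.radians(50))  # ~11.9 km of horizontal shift
print(shift_km / 6.0)    # ~2 pixels at POLDER's ~6 km resolution: easily fused
print(shift_km / 0.25)   # ~48 pixels for a hypothetical 250 m imager: needs far
                         # deeper networks or explicit multi-view geometry
```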
Technical Comments:
Line 7: The comment about “limiting assumptions” requires some justification as polarized 3D RT is an asymptotically accurate approximation of EM in the atmosphere. Computationally expensive, for sure.
Line 9: This pre-processed dataset doesn't seem that generally relevant, given that it is a single dataset rather than a tool that can operate on and harmonize many different data types. Perhaps this should not be emphasized as strongly in the abstract?
Line 19 & 20: Some more specificity/references here would be good e.g. radiative effects / hydrological cycle etc.
Line 22: “is of utmost importance” rather than “will be”?
Line 44: References for multi-layer errors. (Mitra et al., 2021; Holz et al., 2008)
Line 46: Also worth mentioning CPR’s & CALIOP’s sensitivity issues/differences for thin, liquid clouds (Christensen et al., 2013; Chan and Comiso, 2011)
Line 42 – 56: There is a wide variety of work that employs statistical techniques to retrieve 3D cloud structure beyond Brüning et al., 2023, ranging from the local nearest-neighbor matching of Barker et al. (2011) for 3D reconstruction to generative adversarial networks trained on MODIS/A-Train pairs (Leinonen et al., 2019).
Line 48: “The introduction of uncertainty as a part of stereo algorithms must be weighed against the benefit of a wider swath.” I don’t quite understand this sentence. Perhaps, “the introduction of uncertainty into the retrieved cloud top height as a result of applying stereo.” If this is the statement, then I would disagree. All retrievals have uncertainties and stereo tends to be more precise than the mono-angle radiometric alternatives. The tradeoff is also in terms of cost. A multi-angle camera is much cheaper than a lidar.
Line 62: It would be helpful to reference some examples of these applications, even if they are for surface remote sensing, for example.
Line 65 – 70: (Castro et al., 2020) might also be referenced here as an example of high-resolution cloud stereo.
Line 63: Putting the exact POLDER resolution here would be helpful.
Line 78: 3DeepCT does not perform a segmentation task. It is not retrieving a binary mask, it is regressing for the 3D liquid water content. The more sophisticated extension of this work may be of interest in terms of model architectures which explicitly handle the multiple projections of multi-angle imagery (Ronen et al., 2022).
Line 95: Should this be “16 viewing angles from which a point on the Earth can be observed”?
Line 242: A reference for the increasing powers of two in number-of-filters per layer would be helpful.
References:
Barker, H. W., Jerg, M. P., Wehr, T., Kato, S., Donovan, D. P., and Hogan, R. J.: A 3D cloud-construction algorithm for the EarthCARE satellite mission, Quarterly Journal of the Royal Meteorological Society, 137, 1042–1058, https://doi.org/10.1002/qj.824, 2011.
Castro, E., Ishida, T., Takahashi, Y., Kubota, H., Perez, G. J., and Marciano, J. S.: Determination of Cloud-top Height through Three-dimensional Cloud Reconstruction using DIWATA-1 Data, Sci Rep, 10, 7570, https://doi.org/10.1038/s41598-020-64274-z, 2020.
Chan, M. A. and Comiso, J. C.: Cloud features detected by MODIS but not by CloudSat and CALIOP, Geophysical Research Letters, 38, https://doi.org/10.1029/2011GL050063, 2011.
Christensen, M. W., Stephens, G. L., and Lebsock, M. D.: Exposing biases in retrieved low cloud properties from CloudSat: A guide for evaluating observations and climate data, Journal of Geophysical Research: Atmospheres, 118, 12,120-12,131, https://doi.org/10.1002/2013JD020224, 2013.
Holz, R. E., Ackerman, S. A., Nagle, F. W., Frey, R., Dutcher, S., Kuehn, R. E., Vaughan, M. A., and Baum, B.: Global Moderate Resolution Imaging Spectroradiometer (MODIS) cloud detection and height evaluation using CALIOP, Journal of Geophysical Research: Atmospheres, 113, https://doi.org/10.1029/2008JD009837, 2008.
Leinonen, J., Guillaume, A., and Yuan, T.: Reconstruction of Cloud Vertical Structure With a Generative Adversarial Network, Geophysical Research Letters, 46, 7035–7044, https://doi.org/10.1029/2019GL082532, 2019.
Mitra, A., Di Girolamo, L., Hong, Y., Zhan, Y., and Mueller, K. J.: Assessment and Error Analysis of Terra-MODIS and MISR Cloud-Top Heights Through Comparison With ISS-CATS Lidar, Journal of Geophysical Research: Atmospheres, 126, e2020JD034281, https://doi.org/10.1029/2020JD034281, 2021.
Ronen, R., Holodovsky, V., and Schechner, Y. Y.: Variable Imaging Projection Cloud Scattering Tomography, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–12, https://doi.org/10.1109/TPAMI.2022.3195920, 2022.
Citation: https://doi.org/10.5194/egusphere-2023-2392-RC1
- AC1: 'Reply on RC1', Sean Foley, 27 Mar 2024
- RC2: 'Comment on egusphere-2023-2392', Anonymous Referee #1, 06 Feb 2024
This work explores the applicability of deep learning models to predict 3D cloud masking. Using multi-spectral, multi-angle polarimetry, three models of increasing complexity are trained to output a (radar-based) vertical profile. The proposed approach is novel and, combined with the data set, makes for a compelling addition to this line of work. However, the limited analyses do not fully exhibit the performance of the models, nor do they help in evaluating the influence of geometry. Additionally, the text often does not connect the physical world to the machine learning techniques and terminology, and could use revisions to help the general reader understand why such an approach could be seen as beneficial compared to traditional ones. If some of the changes listed below are adopted, I believe the paper would be stronger and would better advocate for the approach. I recommend accepting the paper upon major revisions.
Specific Comments:
While the background and methodology are explained very well, the current iteration lacks a few key points and analyses. Perhaps the most glaring issue lies with the lack of any mention or explanation of Figure 3, which is arguably the most important figure for capturing the performance of the models. Based on this figure alone, one could perform more analyses using histograms and heatmaps (details about further analyses below).
The Dice score, while familiar to a machine learning (ML) audience, is not a commonly used term in remote sensing or the geosciences. The physical interpretation of the Dice score is not presented in the text, which makes it harder to see the use of such a metric. The authors should add more context around the similarity scores and how the Dice score plays into them, along with examples of what a Dice score for a given pair of images would mean in a practical sense. In the same vein, the authors report the Dice score only as a percentage, which is slightly misleading, especially since accuracy is also another metric employed here. It should be reported as a fraction/decimal and, if needed, explained further by converting to percentages.
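[Editorial illustration: a minimal sketch of the Dice score on toy masks, not from the paper. For binary masks A and B, Dice = 2|A∩B| / (|A| + |B|), i.e. the harmonic mean of precision and recall.]

```python
import numpy as np

def dice_score(pred, truth):
    """Dice score for two binary masks: 2*|A & B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# A toy vertical cloud profile: the ground truth has cloud in bins 3-7,
# the prediction recovers bins 4-8 (shifted up by one bin).
truth = np.array([0, 0, 0, 1, 1, 1, 1, 1, 0, 0], dtype=bool)
pred  = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 0], dtype=bool)
print(dice_score(pred, truth))  # 0.8: 4 overlapping bins, 5 bins in each mask
```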
Additionally, some of the ML terminology is either not explained in full or lacks a rooting in the physical world (details in the technical comments). On the other hand, one of the central parts of a paper leveraging deep learning is the architecture and while the authors detail this in the text, figures on the architectures of the 3 models (or at least the simple-CNN and U-Net) would be more beneficial. While Figures 2 and A1 help, they are more schematic flows than architectures. It is recommended that another appendix be added to this paper to delve deeper into the ML terminology as well as more details and figures on the architectures and hyperparameters (although one could see the benefit of including a figure of the architecture within the main text).
Overall, the paper leans heavily into ML with some impressive results. However, I expected more analyses to be performed. For instance, the authors mention that training is performed on various sensor properties but do not show any quantifiable analysis of some important factors. Results based on solar geometries would a) provide insight into the model's invariance to or dependence on them, and b) quantify the effect of the geometrical corrections applied earlier in the data set. Another, and more important, analysis could be based on the types of clouds (stratified by phase, optical thickness, etc.), particularly relevant since instruments like CALIOP are known to have issues with thin clouds; this would provide another perspective into instrument vs. algorithmic errors. Finally, there is a lack of explanation of how exactly this work carries over to AOS and PACE. Since both missions will have higher-resolution sensors compared to POLDER and the abstract makes mention of these missions, a more detailed discussion of how this work might translate is warranted.
Technical Comments:
- Line 78: 3DeepCT does not perform a segmentation task but rather a regression.
- Line 125: The term “test set” might be unfamiliar to those not working with machine learning. It is recommended that the authors add a sentence stating that the test set is not seen by the model during training or validation, and is, therefore, the true test of the model’s performance.
- Line 277: Which experiments is this referring to? A sentence prior to this should be added to clarify, as otherwise the subsequent text is harder to understand.
- Line 281: No explanation is provided as to why the model performed worse with data augmentation. Could the authors elaborate and, if so, add that to the text? Even if the exact reason is not known, the readers and the applied machine learning community would appreciate more specifics on why/why not.
- Line 288: Again, the term "binary cross-entropy loss" would benefit from a brief explanation as to how it optimizes the network better than other loss functions (a standard formulation is sketched after this list).
- Line 293 and line 315: Which tables and figures? The reader should not be expected to hunt for the relevant tables and figures.
- Line 294: The first two sentences of this paragraph should be moved up to the start of section 3.4 where it is relevant and needed. Then, this should be recapped in this section to transition to the thresholding.
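[Editorial illustration for the Line 288 comment above: the standard definition of binary cross-entropy, not specific to this paper. Over $N$ voxels with labels $y_i \in \{0,1\}$ and predicted cloud probabilities $p_i$:]

$$\mathcal{L}_{\mathrm{BCE}} = -\frac{1}{N}\sum_{i=1}^{N}\big[\,y_i \log p_i + (1 - y_i)\log(1 - p_i)\,\big]$$

Relative to a thresholded accuracy, this loss is differentiable everywhere and penalizes confident misclassifications sharply, which is the usual argument for preferring it when optimizing a per-voxel mask.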
Citation: https://doi.org/10.5194/egusphere-2023-2392-RC2
- AC1: 'Reply on RC1', Sean Foley, 27 Mar 2024
Data sets
A-Train Cloud Segmentation Dataset, Sean Foley, https://seabass.gsfc.nasa.gov/archive/NASA_GSFC/ATCS/ATCS_dataset/