the Creative Commons Attribution 4.0 License.
Using Deep Learning and Multi-source Remote Sensing Images to Map Landlocked Lakes in Antarctica
Abstract. Open water on Antarctic landlocked lakes (LLOW) plays an important role in the Antarctic ecosystem and serves as a reliable climate indicator. However, because field surveys are currently the main method of studying Antarctic landlocked lakes, their spatial and temporal distribution across Antarctica remains understudied. We developed an automated detection workflow for Antarctic LLOW using deep learning and multi-source satellite images. The U-Net model and the LLOW identification model achieved average Kappa values of 0.85 and 0.62 on the testing datasets, respectively, demonstrating strong spatiotemporal robustness across the study areas. We chose four typical ice-free areas along the Antarctic coast as our study areas. After applying our LLOW identification model to a total of 79 Landsat 8-9 images and 390 Sentinel-1 images in these four regions, we generated LLOW time series at high spatiotemporal resolution from January to April between 2017 and 2021. We analyzed the fluctuation of LLOW areas in the four study areas and found that during expansion of LLOW, over 90 % of the changes were explained by positive degree days, while during contraction, air temperature changes accounted for more than 50 % of the LLOW area fluctuations. Our model can provide long-term LLOW series products that help us better understand how these lakes change under a changing climate.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-1810', Anonymous Referee #1, 22 Nov 2023
Summary
The paper describes a semantic segmentation scheme to map landlocked lakes in Antarctica, using Landsat and Sentinel-1 satellite imagery as base data. Landsat images are segmented with a U-net, Sentinel-1 with a manually tuned threshold. The results are merged with a simple late fusion logic.
Novelty/Relevance
There isn’t any technical novelty. Methods are standard and used in a somewhat ad-hoc manner, without clear justification for the design.
The specific application appears to be new; I am not aware of any paper that has described the specific case of land-locked Antarctic lakes. That being said, the distinction is perhaps a tad contrived: there has certainly been work on detecting supra-glacial lakes, so the only difference is really to check whether a lake is surrounded by rock or by ice.
Strengths
Since the task has apparently not been studied before, there is potential to systematically map land-locked lakes with the method (it is not done at any scale, though). I am not an expert in Antarctic ecology or climate and cannot judge the relevance of this, but it is a mapping capability that hadn’t been investigated.
The proposed method works moderately well, even if the segmentation performance is not surprising or spectacular for the fairly straightforward task.
Weaknesses
Technical decisions seem somewhat arbitrary and ad-hoc. Not seriously wrong, but the described scheme is just “a way to do it”, not a carefully designed and justified “best way to do it”.
The evaluation is rather weak, using only a few small areas, and even excluding some lakes that are clearly visible within the image tiles. The study does not go beyond the four small proof-of-concept regions, there are no large-scale, wall-to-wall results.
The model validation suggests that almost all the performance is due to Landsat, whereas Sentinel-1 does not offer much except the potential to densify in time - which however is not actually done, since the Landsat segmentation acts as a hard constraint: the algorithm does not appear to allow SAR to add lake pixels.
Finding of a “decreasing trend in LLOW area” for 2017-2021 is rather trivial and expected. It would be more interesting to interpret the measured areas beyond just that obvious trend.
Presentation
Throughout, the text could be made shorter and more concise. E.g.,
- lines 215-220 are unnecessary, everything that is said there is already implied by the use of U-net
- 228-240 is a verbose, meandering way to simply say “we manually chose a global threshold by inspecting histograms”
- 250-264 says little more than that the definition of a land-locked lake is a water region surrounded by a rock region.
- etc.
The introduction is verbose and not very focussed, touching on all sorts of studies about land-locked lakes that have no relation or importance for what the paper then does.
The analysis in lines 420-440 is rather hand-wavy; I was not able to see what purpose it actually serves. It gives me the impression that the authors just performed a random analysis that was easily doable, to send a message that the maps could potentially serve some useful purpose.
There are remaining language issues, both in terms of English grammar (random example: “due to non-uniform of field surveys”) and in terms of technical expressions (e.g., “gradient disappearance” instead of “gradient vanishing”).
Technical Questions
The computational procedure is not entirely clear. It is one way of cobbling together a segmentation pipeline, but there is no clear explanation of why that specific design was chosen, and there are obvious concerns about it, e.g.:
- the potential benefit of Sentinel-1 for the land-cover map is not exploited
- possible correlations between optical and SAR are lost
- the fusion seems not to leverage the (pseudo-)probabilistic segmentation scores
What is meant by saying 300x300 is the “common” patch size for U-net models? There isn’t a single, canonical patch size for training those models, and at test time they are anyway fully convolutional and not tied to a specific patch size.
I don’t understand the upsampling of the input for the U-net. No information is added by this and the effective receptive field / context window inside the network is actually reduced. So it would seem that one can reach at least equal performance, with lower computational effort, by properly training the U-net to handle the smaller images. Please explain.
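The receptive-field concern can be made concrete with back-of-the-envelope arithmetic. The sketch below assumes a plain 4-pool U-net encoder (3x3 convolutions, 2x2 max-pooling, pooling's own small contribution ignored); the upsampling factor is an assumption for illustration, since the review does not state the one used in the paper.

```python
# Approximate receptive field (RF) of a plain U-net encoder: each 3x3
# conv grows the RF by (k-1)*jump, each 2x2 max-pool doubles the jump.
def encoder_receptive_field(pools=4, convs_per_stage=2, k=3):
    rf, jump = 1, 1
    for _ in range(pools):
        rf += convs_per_stage * (k - 1) * jump  # conv stage
        jump *= 2                               # 2x2 max-pool
    rf += convs_per_stage * (k - 1) * jump      # bottleneck convs
    return rf

rf = encoder_receptive_field()  # RF in *input* pixels, here 125

# If the input is first upsampled by factor s (assumed value), the same
# fixed RF covers only rf / s pixels of the ORIGINAL image, i.e. the
# effective context window shrinks rather than growing.
s = 2
print(rf, rf / s)
```

Whatever the exact architecture, the fixed network-side receptive field divided by the upsampling factor is what shrinks, which is the core of the objection.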
Certain augmentations should be ablated and empirically justified. Conceptually it seems problematic to apply transformations like rotating or vertical flipping, as this leads to illumination directions that are implausible in real Landsat images, especially at Antarctic altitudes with low sun elevation.
Using a threshold for segmentation is of course entirely correct and sensible, if it works. But the justification that single-channel input will lead to “instability” of U-net makes no sense. Countless applications use U-nets with various single-channel inputs (SAR, panchromatic, depth,…).
The fusion step is unclear. First you argue for using SAR, and for combining it with optical data, to obtain better temporal resolution. But then, a consensus over at least 2 Landsat acquisitions is required for a potential LLOW pixel, meaning that the shortest possible resolution of everything that follows is the interval between three Landsat overpasses (if a pixel flips from ice to water between two consecutive images, you need to wait for a third image to confirm, so over the entire period you cannot say whether the pixel remained the same or thawed and froze again).
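The confirmation-delay argument can be illustrated with a toy rule. This is a hypothetical reconstruction from the review's description (a pixel only counts once two consecutive Landsat acquisitions agree), not the authors' actual fusion code:

```python
# Toy consensus rule: a pixel's state is confirmed only when two
# consecutive Landsat acquisitions agree, so a state flip is confirmed
# one acquisition late, and a flip-and-back between two images can
# never be resolved.
def confirmed_states(landsat_obs):
    confirmed = []
    for prev, curr in zip(landsat_obs, landsat_obs[1:]):
        confirmed.append(curr if curr == prev else "unconfirmed")
    return confirmed

obs = ["ice", "water", "water", "ice"]
# The ice->water flip seen at acquisition 2 is only confirmed by
# acquisition 3; the flip back to ice at acquisition 4 stays
# unresolved until a further image arrives.
print(confirmed_states(obs))  # ['unconfirmed', 'water', 'unconfirmed']
```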
In line 290, it remains unclear how the authors “disregard” underestimated lakes. To do that one must identify them first - but the algorithm, by definition, does not know where it made an underestimation error.
I do not understand why only a tiny set of 17k pixels were “annotated for U-net”, but 225k pixels were “annotated for LLOW identification”. 17k seems an overly small training set: assuming an average lake size of, say, 600x600m that would be fewer than 50 lakes. Why would one do that if apparently another 225k annotated pixels are available?
Cohen’s kappa as a segmentation metric is discouraged (cf. [Pontius and Millones, 2011]). I would recommend following best practice and showing confusion matrices, F1 scores, and IoU scores.
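To make the recommendation concrete, here is a minimal sketch of the recommended metrics computed from a binary (water / non-water) confusion matrix, next to Cohen's kappa for comparison; the counts are illustrative only, not results from the paper:

```python
# F1, IoU (Jaccard), and Cohen's kappa from binary confusion counts.
def metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)            # a.k.a. Jaccard index
    po = (tp + tn) / n                   # observed agreement
    pe = ((tp + fp) * (tp + fn)          # chance agreement
          + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return f1, iou, kappa

f1, iou, kappa = metrics(tp=80, fp=10, fn=20, tn=890)
# F1 and IoU are monotonically related (F1 = 2*IoU / (1 + IoU)), so
# reporting both mainly aids comparability; kappa additionally depends
# on class balance via pe, which is the core of the critique.
print(f1, iou, kappa)
```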
The PDD metric (equation 4) is defined in a strange manner. According to the definition, the metric is exactly the same for a cold spell of two weeks constantly at 0 degrees as for one constantly at -25 degrees. Surely that would make a difference for the ice cover of the lakes? Wouldn’t it be more natural to look at the number of consecutive PDD, or to integrate the average temperature including negative values?
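The degenerate case can be checked directly, assuming equation 4 is the standard definition PDD = Σ max(daily mean temperature, 0); the equation itself is not reproduced in this discussion:

```python
# Standard positive-degree-day sum (assumed form of Eq. 4).
def pdd(daily_mean_temps):
    return sum(max(t, 0.0) for t in daily_mean_temps)

def negative_degree_days(daily_mean_temps):
    # One alternative raised above: integrate sub-zero temperatures too.
    return sum(min(t, 0.0) for t in daily_mean_temps)

mild_spell = [0.0] * 14      # two weeks constantly at 0 degC
severe_spell = [-25.0] * 14  # two weeks constantly at -25 degC

# PDD cannot distinguish the two cold spells...
assert pdd(mild_spell) == pdd(severe_spell) == 0.0
# ...whereas integrating negative temperatures separates them clearly.
assert negative_degree_days(severe_spell) == -350.0
```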
For the decline (Section 5.2), why use the minimum temperature? To my knowledge, and also more in line with the PDD metric used earlier, a more common indicator is the number of consecutive negative degree days, at least in studies of lake ice in Canada, the Alps, etc.
Minor Comments
Why do ice-covered lakes “magnify the warming trend”? I would think they might rather dampen it?
It is a strange claim that U-net “requires less training datasets and time” than other neural networks. That depends on what you compare to; of course there are designs that are faster than U-net (e.g., those created for mobile or embedded devices). Moreover, “U-net” isn’t a specific architecture but a whole family of networks with certain characteristics - essentially, a symmetric hourglass encoder-decoder structure with dense skip connections. So some “U-nets” are a lot slower and more data-hungry than others.
Citation: https://doi.org/10.5194/egusphere-2023-1810-RC1
AC2: 'Reply on RC1', Guitao Shi, 15 Feb 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-1810/egusphere-2023-1810-AC2-supplement.pdf
RC2: 'Comment on egusphere-2023-1810', Anonymous Referee #2, 19 Dec 2023
Summary
The manuscript aims to detect landlocked lakes in Antarctica by fusing optical and SAR imagery and using a U-net-based method.
Novelty/Relevance
I’m not aware of another method addressing land-locked Antarctic lakes. However, the methods used are standard or, in the case of thresholding the SAR imagery, outdated within the field of research. The thresholding method also means that lakes under different wind states can’t be separated. Moreover, the method can’t separate these lakes from other types of lakes; not surprising, but if that was the goal, the method needs to be further improved.
Strengths
Study of land-locked lakes in Antarctica, where the method is designed to detect a specific type of lake. Could this method be applied to other landlocked lakes to monitor water resources? If so, it might become a more viable method.
Weaknesses
The manuscript is weak in the technical details, in particular there is an apparent lack of understanding of satellite images and details around them are missing. How is the SAR data pre-processed? Is the different spatial resolution between the optical and SAR images accounted and corrected for? The incidence angle dependency in SAR data will result in higher incidence angles having a lower backscatter response. How is this accounted for in the method? Are only repeat orbits used? How will the incidence angle affect the results here?
Radar shadows (e.g. mentioned on P12 R271) are a well-known issue within SAR images. A method should be designed to deal with them or at least quantify the scale of the issue.
Wind may cause a wind-roughened (high backscatter) surface; it appears that the model can only deal with low-backscatter surfaces. The method would then only be applicable to a limited number of SAR images, and this limitation would hinder operationalization or a processing chain with a larger number of images. Moreover, separating open water areas from their surroundings is challenging due to the varying backscatter values under different wind conditions. To make a method applicable in an ML/DL/operational setting, all wind states need to be accounted for in the method - something that is challenging for a threshold-based method.
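The wind limitation can be sketched with a toy global threshold on SAR backscatter. All values, including the threshold, are illustrative assumptions, not numbers from the manuscript:

```python
# Toy global-threshold water mask on SAR backscatter (dB). Smooth water
# reflects the radar pulse away from the sensor and appears dark; wind
# roughening raises its backscatter toward that of the surroundings.
THRESHOLD_DB = -18.0  # hypothetical global threshold

def water_mask(backscatter_db):
    return [v < THRESHOLD_DB for v in backscatter_db]

calm_lake = [-22.0, -24.0, -21.5]  # specular: low backscatter
windy_lake = [-12.0, -10.5]        # wind-roughened: high backscatter
rock = [-8.0, -6.5]

assert all(water_mask(calm_lake))       # detected under calm conditions
assert not any(water_mask(windy_lake))  # missed: indistinguishable from rock
assert not any(water_mask(rock))
```

No single global threshold separates windy_lake from rock here, which is why the review argues a threshold-based method cannot cover all wind states.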
The text should be significantly shortened to avoid unnecessary repetitions and focus the message on what was done here. E.g., among other things, section 3.4 can be significantly shortened by removing repetitions. As the other reviewer has already pointed out that the text is verbose and provided examples, I’ll not do so further here.
Statements that the “best” analysis etc. has been used should be strengthened by indicating what makes this the “best”.
The correlation between PDD and lake area is long established and is not new, and neither is different lake-shape evolution with different PDD evolution. The fact that lakes melt first from the edges is fundamental knowledge, not new knowledge established here. These results can easily be referred to existing literature, and this manuscript should highlight what is new knowledge aside from these well-established results.
Figure 9 shows a significant amount of lake area before the start of the study; in order to show time series of lake evolution, for at least 3 of the sites the time series needs to be expanded to include data from at least one month earlier. How is the time series affected by removing all the troublesome images, e.g. the wind-affected ones and those where the method failed? Lack of useful data at the start and end of the season will lead to under-/over-estimation of the lake area. Can the method (time series) be said to satisfactorily deal with rapid changes? Or is there a need to increase the sampling frequency? P20R402: How does the lack of data in December affect this? It appears that, at least for the CWM site, data from earlier months is needed to establish the maximum lake area, and probably also for the LH site, judging from Figure 9.
There are 3 different lake types presented here; how can this method separate the three types if they all exist in one satellite image?
P13R290-292. If the LLOW are underestimated, how does this affect the biological component that has been used as an argument for conducting the entire study?
P14R317. It is stated that the thresholding method produces a large amount of errors. Establishing an improved method should therefore be a goal of this manuscript. Separating open water areas that are sea water, melt lakes on the ice sheet, or lakes on land is not possible using simple thresholding in the SAR images, as the radar signal interprets each as water.
P21R411-420. Lake growth after temperatures start exceeding 0 has been well studied on the Greenland ice sheet for well over 10 years now, and PDD was used by, e.g., Johansson et al. (2013) to study lake evolution.
Johansson, Jansson and Brown (2013): Spatial and temporal variations in lakes on the Greenland Ice Sheet, J. Hydrology, https://doi.org/10.1016/j.jhydrol.2012.10.045
Presentation
There is a substantial amount of detail about the chemical and biological importance of these lakes in the introduction. Shorten this to one paragraph of up to 4 sentences and instead highlight how this work fits into lake detection using satellite images, ML/DL lake detection, or similar. The work should be set into the context of existing science on the topic presented here, not a different scientific field.
P7R145-150. If dual-polarization data is not available, why is it discussed here, where the method is presented? This would fit better in the introduction or the discussion.
Minor comments
P2R12. What is “reliable” in this context?
P2R16. Why did you choose ice-free areas? And what do you mean by ice-free here: no glacier ice, no inland ice sheet, no sea ice?
P4R63. Rapidly -> change to a more scientific wording.
Section 2.2. The number of optical images is introduced as 79, and then there is a discussion about removing images. It later transpires that 79 images were used. The text must be amended so that it is clear how many images were used.
P5R103. Specify what “superior in many aspects” means.
P6R113. This is well known; remove this reference to fundamental radar knowledge.
P6R117. Define high-resolution
P6R125-127. Why is there a reference attached to one of the weather stations and not the other? Is it possible to give credit to the data providers instead? The text about “temperature” is confusing: is this not actual temperatures but some kind of simulated temperatures, or why have quotation marks been used?
Chapter 3. The ground truth should be presented in the data section and not as part of the results in chapter 3. This also goes for parts of chapter 3.2, which should also be moved to the data section.
P11R226-229. Very well known (fundamental radar knowledge); remove the reference.
Within this manuscript Sentinel-1 has been used; it is essential to call it that (or to define an acronym if it is preferred to call it Sentinel), as there are many ESA Sentinel satellites, and there are also the Asian Sentinels.
P12R270. Many methods detect glacier outlines etc. A more thorough method should at least attempt to separate ice (moving material) from the more stationary rocks.
P13R297-299. Well-known fundamental radar knowledge; remove the reference.
P16R348. Remove the stray quotation mark.
Citation: https://doi.org/10.5194/egusphere-2023-1810-RC2
AC1: 'Reply on RC2', Guitao Shi, 15 Feb 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-1810/egusphere-2023-1810-AC1-supplement.pdf
EC1: 'Comment on egusphere-2023-1810', Nicholas Barrand, 21 Feb 2024
I have read through the reviews and responses documents. All points are addressed, but it's unclear to what extent and how these will be changed in a revised manuscript. There are lots of 'We will do this...'. Please can I ask the authors to make their promised changes, then revise and upload the manuscript, and provide a document addressing each of the reviewers' comments, with links in the manuscript as to where changes have been made and to which comments they refer. The revised manuscript needs to clearly denote where revisions have been made (perhaps in blue text). Thanks, Nick Barrand
Citation: https://doi.org/10.5194/egusphere-2023-1810-EC1
Viewed
- HTML: 425
- PDF: 176
- XML: 41
- Total: 642
- Supplement: 53
- BibTeX: 26
- EndNote: 30
Anyao Jiang
Xin Meng
Yan Huang