This work is distributed under the Creative Commons Attribution 4.0 License.
Machine learning of Antarctic firn density by combining radiometer and scatterometer remote sensing data
Abstract. Firn density plays a crucial role in assessing the surface mass balance of the Antarctic ice sheet. However, our understanding of the spatial and temporal variations in firn density is limited due to i) spatial and temporal limitations of in situ measurements, ii) potential modelling uncertainties, and iii) lack of firn density products driven by satellite remote sensing data. To address this gap, this paper explores the potential of satellite microwave radiometer (SSMIS) and scatterometer (ASCAT) observations for assessing spatial and temporal dynamics of dry firn density over the Antarctic ice sheet. Our analysis demonstrates a clear relation between density anomalies at a depth of 4 cm and fluctuations in satellite observations. However, a linear relationship with individual satellite observations is insufficient to explain the spatial and temporal variation of snow density. Hence, we investigate the potential of a non-linear Random Forest (RF) machine learning approach trained on radiometer and scatterometer data to derive the spatial and temporal variations in dry firn density. In the estimation process, ten years of SSMIS observations (brightness temperature), ASCAT observations (backscatter intensity), and polarisation and frequency ratios derived from SSMIS observations are used as input features to an RF regressor. The regressor is first trained on time series of modelled density and satellite observations at randomly sampled pixels, and then applied to estimate densities in dry firn areas across Antarctica. The RF results reveal a strong agreement between the spatial patterns estimated by the RF regressor and the modelled densities. The estimated densities exhibit an error of ± 10 kg m−3 in the interior of the ice sheet and ± 20 kg m−3 towards the ocean.
However, the temporal patterns show some discrepancies, as the RF regressor tends to overestimate summer densities, except for high-elevation regions in East Antarctica and specific areas in West Antarctica. These errors may be attributed to underestimations of short-term (or seasonal) variations in the modelled density and the limitation of RF in extrapolating values outside the training data. Overall, our study presents a potential method for estimating unknown Antarctic firn densities using known densities and satellite parameters.
Status: closed
-
RC1: 'Comment on egusphere-2023-1556', Anonymous Referee #1, 25 Sep 2023
This paper details a study using machine learning (ML) to examine Antarctic firn density. The paper is interesting but needs some further revisions before it is suitable for publication. I have put some suggestions and questions below.
Major comments:
Introduction, I suggest you start bigger: why does the Antarctic ice sheet matter to the globe? Also, I think you need to define firn for folks who are not clear on what it is.
On line 142, you say that the firn model has a resolution of 27 km – is that sufficient to capture the firn variations? This is quite coarse, in my opinion. Is this 27 km by 27 km grid cells? I think this needs to be stated more clearly.
I think you need at least one study site figure that has all of the locations you refer to in the paper on one introductory map. See my comment from Line 152, for example.
Overall, the study design seems confusing. You take the time to cluster the data, but then you do not use it for the analysis, really. Why would you not use that to identify the dry-snow zones, and then perhaps build multiple RF models to see what zone could be best captured? This seems like an interesting approach to take but was not used. I think that this would also eliminate the need to only model the non-wet areas if you simply remove the regions that do poorly in satellite observations.
I do not understand why you didn’t use the RF and importances to reduce your model variables. As you show in Figure 5, it looks like these anomalies are not adding much to the RF model. I think you might be able to remove them in the analysis. Did you consider other types of ML models, or did you just decide to use RF approaches? Why not consider other approaches?
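To illustrate the kind of pruning I have in mind, here is a minimal sklearn sketch on synthetic data (variable names and thresholds are hypothetical, not the authors' setup):

```python
# Sketch (not the authors' code): drop near-zero-importance inputs from a
# fitted Random Forest, then refit on the reduced feature set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical inputs: two informative "satellite" features plus two noise features
X = rng.normal(size=(n, 4))
y = 350 + 20 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=2.0, size=n)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
keep = rf.feature_importances_ > 0.05          # threshold is an arbitrary choice
rf_reduced = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:, keep], y)
```

Comparing the reduced model's skill against the full one would show directly whether the anomaly inputs earn their place.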
On line 325, you say “that do not correspond to changes in densities in dry-firn regions”. This line has me wondering about the objective of your work. Are you interested in the firn estimation or are you interested in the change in firn over time? Is the RF model developed for this? Or, are the clusters? You say in the beginning of the paper (Line 71) that the objective of this paper is to “assess the feasibility of combining radiometer and scatterometer remote sensing data to assess Antarctica-wide dry firn density.” But, you also say on Line 220 “As our goal is to relate the satellite time series to assess spatio-temporal variations in firn density, we adopt an alternative approach that uses the output of IMAU-FDM as training data instead of relying on in situ data.” What is the objective of this work? If it is average firn, then you can develop your model in one way, but if it is not, then you should develop it in another.
Minor Comments:
Line 17, short-term (or seasonal) variations, is it both or do you just mean seasonal?
Line 30, This statement needs a reference.
Line 63 However, the precise mechanisms underlying the interaction between firn densities and satellite observations cannot always be fully understood (Champollion et al., 2013; Fraser et al., 2016; Rizzoli et al., 2017). What do you mean by this? Interaction implies they are interacting, which they are not…
Line 67, “to other areas or time periods therefore requires further assessment (Tran et al., 2008; Fraser et al., 2016; Nicolas et al., 2017; Rizzoli et al., 2017)”. What did they find? Was it successful, i.e., did it work?
Generally, italicize In situ.
On line 70, you talk about calibration. You did not mention calibration previously, and it is unclear what this is referring to. Models? The satellites? Fusion methods? I think this needs to be tied to modeling and why calibration is needed. Otherwise it seems to be coming in the text out of the blue.
Line 72, you talk here about three experiments; did you ever compare with or use the in situ observations? It seems like SUMup is not used (or mentioned) in any one of the experiments. I think if you are going to mention SUMup, you need to say where it was applied in the experiments.
Line 132, “outputs of the regional atmospheric climate model RACMO2.3p2” These scales seem really different… What resolution is the model run at?
Line 140 – move these two sentences up to say this earlier (perhaps line 131), that will assist with my previous comment. The first sentence of this paragraph could be combined with the previous one.
Line 132, RACMO2.3p2 – define?
Generally, through the text, you refer to “the model output”, or “models”. As you have multiple models, I suggest calling the models by their names, or ensuring they are referenced clearly to distinguish the model.
Line 135, we focus on the density of the… How many layers are there in this model in total?
Line 142, the firn data are reprojected – this is modeled data, correct? I think you want to make sure to differentiate the model from the observations.
Line 138, “…have been acquired at approximately this depth…” Why? This seems kind of arbitrary. Also, 4 cm seems very shallow for firn. Is this because it is in Antarctica?
Line 145, Surface Mass Balance and Snow on Sea Ice Working Group (SUMup) dataset. You have already used this acronym, define it earlier.
Line 146, “at the smallest mid-point depths” More clarity please, what is ‘small’ and what is the mid-point of?
Line 151- For each date of measurement at each location, talk about the locations and dates first… What locations are these dates at?
Line 152, Dome C, where is this? Map?
Line 159, By incorporating this information… I don’t understand how the ERA5 data was used and why it was used. This needs to be better explained.
In Section 2.5, are you talking about comparing model and observations (at points?).. and the satellites? I think this needs to be thought through and justified in the text. Comparing satellite and model data with single point measurements is tricky. There are a lot of references out there about how to do this, particularly in the climate modeling realm. I suggest the authors read some of these papers and at least add a discussion in the text around this.
Sometimes you say “firn data” and other times you talk dry firn. Should this be defined? Can you make sure you are being consistent through the text?
Line 168, dry-snow zones, what are these?
Line 184, model training procedure. Which model are you talking about?
Line 190-195, this is not very clear. Can you rephrase?
Line 196, variations of other properties. What other properties?
Line 196, In addition, although may not have such large dependence on firn temperature as TB, we use its time series anomalies to maintain consistency with TB. This is unclear, can you rephrase?
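If it helps: my reading of “time series anomalies” is the series minus its multi-year mean seasonal cycle. A synthetic numpy sketch of that operation (illustrative data, not the authors'):

```python
# Sketch: a "time series anomaly" as a daily series minus its multi-year
# mean seasonal cycle (synthetic brightness-temperature-like data).
import numpy as np

n_years, doy = 10, 365
t = np.arange(n_years * doy)
rng = np.random.default_rng(2)
tb = 200 + 15 * np.sin(2 * np.pi * t / doy) + rng.normal(scale=1.0, size=t.size)

seasonal = tb.reshape(n_years, doy).mean(axis=0)  # mean value per day of year
anom = tb - np.tile(seasonal, n_years)            # seasonal cycle removed

# anom now retains only the non-seasonal variability
```

Stating something to this effect in the text would make the consistency argument for the backscatter anomalies easier to follow.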
Line 204, ‘distance’ between pixels. Make sure to clarify that this is not spatial distance.
Line 205, between the parameters of different pixels. What parameters?
Line 210, different satellite parameters, together with the IMAU-FDM density for each cluster. I do not understand what this means. What parameters? I think you need to make a table of parameters.
Line 217, RF regressor. Add more references since this approach is now widely used in climate science, for snow distribution mapping and other work.
Line 225, for pattern recognition in noisy datasets. Add a reference here.
Line 226, reduce the variance of the model and prevent overfitting. Add a reference.
Did you consider other ML approaches? Line 230 onward: are you building a RF for each time step to estimate the timeseries at each grid cell? How many samples in total went into the model? Line 244 talked about pixels, and the resulting sample size. Is this the total number of samples? Do you think this is enough to train a RF, especially considering the results?
Table 1. These are all hyperparameters, do you call them parameters (which you use other times in the paper for the actual input parameters to the model, which in itself is confusing).
Line 258. Why did you only use Gini importance vs other importance metrics? Did this choice affect any of your importance rankings?
Line 264, by means of the RMSE. Do you mean averages or do you mean via using RMSEs?
Line 269, this is the first time you refer to Figures… why do it here and not in the rest of the methods?
Line 277, ‘cluster Firn 5’, you have not introduced the cluster results yet so we do not know what these are.
Figure 1, can you tell us which ones are from the satellites parameters and which ones are from the model? Again, a table might help with this.
Line 283, especially at the location of Dome C, again, need to show on the maps or include a figure.
Line 287, There are a lot of other good reasons why the RF should be used, I do not think that this is the strongest one.
Section 4.2 Do you think that you could produce different RF models for each cluster, perhaps? I think this would be very interesting to understand the difference between the performance of each of these models. For instance, if the dry firn can be modeled with lower RMSE /error than some of the other clusters. Honestly, I am still unclear if you did it this way or not.
Figure 4 and Figure 5a
These figures are making me wonder what it is that you are trying to do. For instance, are you trying to estimate the time series of the seasonality and variability you see in Figure 4 with the RF? What is the RF estimating, exactly…? You say “firn densities based on satellite parameters” and you talk about a time series, but I am wondering how you are doing this. What is X in Equation 4, actually? I do not know if this is ever said.
Figure 5b, are all these parameters standardized? Is the importance based on the standardized inputs? I just wonder because the anomalies appear to be the least important, which makes me wonder if perhaps the other parameters are not. Again, if these are not contributing much to the model, did you play around with them being removed? Does the model improve with fewer parameters? Are there any strong correlations between these parameters at all? How are they related or not related to each other?
Line 301, The differences between these clusters mainly arise from deviations in TBanom and, to a lesser extent, 0anom. What is different about them?
Line 298, If 1-4 are basically the same, why are they not being treated as a single cluster?
Line 303, Are the melt events shown /evidenced in time in the region? Can you talk about this a little bit? You refer to a paper, but don’t go into detail otherwise.
Line 305, Can you describe how density would change under these melt events, and why? You do not give much background on that.
Line 306, Firn 5, where the melt event of 2016 shows a prolonged effect on the anom time series due to the formation of a sub-surface refrozen high-density layer in IMAU-FDM. Again, what are the implications of this, and what does it mean for firn?
Figure 6. Again, this figure only shows results as temporal averages. How did the time series of the RF do?
Line 383, It is important to note that the wet firn clusters are not used in the following RF steps due to the complex impact of the melt–refreeze cycle on satellite observations. Again, I am thinking that the RF and this cluster analysis is not related.
Line 317, Exhibiting a linear relationship between predictors and the predicted variable – predictand? Saying it this way is confusing.
Figure 5, add units.
Figure 5a, Why do no values exceed this amount? I wonder if perhaps your training data set somehow selects lower firn values… are you randomizing between your training and test sets?
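One way to rule out leakage when the samples are per-pixel time series is to randomize the split by pixel rather than by sample, so all time steps of a pixel land on one side of the split. A hypothetical sklearn sketch (synthetic data):

```python
# Sketch (hypothetical setup): group-aware train/test split so that all
# time steps of a given pixel go either to training or to testing, never both.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(1)
n_pixels, n_days = 200, 50
pixel_id = np.repeat(np.arange(n_pixels), n_days)   # group label per sample
X = rng.normal(size=(n_pixels * n_days, 5))
y = rng.normal(size=n_pixels * n_days)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=pixel_id))

# No pixel appears on both sides of the split
assert not set(pixel_id[train_idx]) & set(pixel_id[test_idx])
```

If the test skill drops sharply under such a split, the original randomization was mixing each pixel's time series across the two sets.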
Figure 6 shows that the model is basically as good as the RF (if not better at anything other than the mean). So, why do you need an RF model in this case? How difficult is the model to set up and apply? Again, is there a good reason for the RF here if it doesn’t perform that well, is not finer in scale, or it doesn’t really do that well except on average?
Figure 6d, can you show which is which? Use different symbols instead of colors? They are difficult to differentiate. This intercomparison with observations is likely very challenging to achieve (which I think is what you are attempting to do). I might suggest some sort of spatial upscaling for single-point / in situ observations.
Figure 8 illustrates how poor the RF is for a time series. But, I am unclear if you are doing this in the right way. Clarity of methods is required.
Citation: https://doi.org/10.5194/egusphere-2023-1556-RC1 - AC2: 'Reply on RC1', Weiran Li, 25 Nov 2023
-
RC2: 'Comment on egusphere-2023-1556', Emanuele Santi, 05 Oct 2023
The subject of this manuscript is of definite interest for the scientific community. The introduction correctly frames this study in the existing literature, the language is clear, and the narrative flows smoothly. However, the innovation with respect to other studies should be better pointed out, and the description should be improved in some respects, as should the presentation of the results. Besides this, the paper suffers from some gaps in the microwave background, and I suspect two conceptual issues: the first deals with the attempt to retrieve the density of the 4 cm top layer, which should be quite transparent at the considered MW frequencies in dry conditions. The second concern is about merging direct satellite measurements and derived indices in the RF inputs: based on information theory, the indices should not bring any additional information independent of the Tb from which they have been computed, so, also based on my experience, these indices should negligibly affect the results.
Detailed comments:
Introduction.
- The introduction contains a review of the state of the art more than enough to frame this paper. I would only suggest clarifying the aspects related to different spatial resolution, coverage and revisiting when mentioning active and passive MW.
Section 2
Section 2.1.
- Equation 1 and 2 are properly referred to the original publications, however a short sentence about the physical principles behind would be useful for the reader.
- Line 94-95. The dramatic change in emission mechanism due to the presence of liquid water within the ice sheet might be commented, although this point is mentioned later in section 3.2. Same applies to the scattering in section 2.2.
Section 2.2.
- the linear correction for local incidence angle (LIA) sounds a bit odd to me. LIA should already be accounted for when computing the NRCS to extract the backscattering (σ°). In any case, the backscattering dependence on LIA is not linear at all. Finally, as far as I understand from page 5, line 125, in the end you did not use data corrected with eq. 3. Could you clarify further?
- The spatial and temporal co-registration between ASCAT and SSMIS should be better described; depending on the processing you applied, this could lead to errors and artifacts. In the end, how many co-located Tb and σ° did you obtain? This is important information for better understanding the RF implementation, although something is addressed later.
Section 2.3
- line 135 – 138. As stated in the general comments, the attempt to retrieve density at 4 cm raises a conceptual issue. The top 4 cm layer should be almost transparent not only at C-band but also at Ka-band in the case of dry firn. I wonder if you are obtaining results based on an indirect correlation of the top-layer density with deeper layers to which microwaves are instead sensitive. No wonder if RF achieves successful retrievals: machine learning can exploit almost any kind of input/output relation, but the risk of finding something based on apparent relationships is always around the corner. If used as “black boxes”, ML could potentially relate newborns in China to weather in the USA, but what is the utility? I believe a robust physical justification is needed.
- Line 138 – 140. The sentence is unclear to me; could you rephrase it, please? Where was the density at 1 m depth used later?
Section 3
Section 3.2.
- Is the Tb ratio the same as in eq. 1? If so, no need to introduce it again with a reference.
- In my understanding, volume decorrelation was not introduced before. The cited work by Rizzoli uses X-band SAR; it is not clear if this finding is also valid for radiometric measurements (scattering and emission are complementary to each other).
- Line 195 – 199. The normalization by firn temperature is embedded in both parameters you defined in eq. 1 and 2. What, then, is the reason for removing the average seasonal Tb signal? And what is the reason for doing the same with the backscattering, which is almost insensitive to temperature? Moreover, machine learning techniques such as RF can cope with redundant, noisy, and biased data, so using timeseries of measurements or their anomalies should not change the results much. Finally, there is also a concern in merging Tb with their ratios that is commented on below.
- Lines 201 – 211. The clustering algorithm should be better explained, maybe with a supporting figure/diagram. I don’t believe a reader unfamiliar with the Ward algorithm can understand this section.
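For example, a supporting snippet or diagram along these lines (synthetic stand-ins for the per-pixel satellite parameters, not the authors' data) could anchor the explanation of what Ward clustering does:

```python
# Sketch: Ward hierarchical clustering of per-pixel feature vectors.
# Merges are chosen to minimise the increase in within-cluster variance.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)
# Two well-separated synthetic "pixel" populations in a 4-D feature space
pixels = np.vstack([rng.normal(0.0, 0.5, size=(50, 4)),
                    rng.normal(5.0, 0.5, size=(50, 4))])

Z = linkage(pixels, method="ward")                 # full merge tree (dendrogram)
labels = fcluster(Z, t=2, criterion="maxclust")    # cut the tree at 2 clusters
```

A dendrogram of `Z` would be exactly the kind of supporting figure I mean.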
Section 3.3
- Lines 231 – 246. With “sample” do you refer to the set of temporally and spatially coregistered SSMIS and ASCAT measurements for a given pixel? In my understanding, for both subsets I and II you randomly selected 100 pixels from the 7 clusters over Antarctica described in section 3.2 (that is spatial, 25 km resolution per pixel) and considered the timeseries of satellite measurements (that is temporal, approx. 1 set of SSMIS + ASCAT measurements per pixel per day over 10 years). In the end you should have used 365,300 sets for training and the same amount of data for testing. In other words, you considered about 125,000 km² for training and testing and applied the trained RF to the remaining ≈14,000,000 km² of Antarctic surface, which is notable. Maybe some more information could be provided…
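For reference, the arithmetic behind the figures I quote above, under my stated assumptions (100 pixels per subset, one sample per pixel per day over 10 years, 25 km × 25 km pixels):

```python
# Quick check of the sample and area counts quoted above (my assumptions:
# 100 pixels per subset, daily sampling over 10 years, 25 km pixels).
days = 10 * 365 + 3                    # ten years including leap days ≈ 3653
train_samples = 100 * days             # samples per subset
train_test_area = 2 * 100 * 25 * 25    # km², training and testing combined
```

`train_samples` comes out to 365,300 and `train_test_area` to 125,000 km², which is what prompted the comparison with the ≈14,000,000 km² application domain.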
- Equation 4. The proposed input combination raises another concern: from the information theory, the Tb ratios do not bring to the RF additional information independent of the Tb from which they have been computed, therefore (this is also my personal experience) the results should not be affected by these inputs (or conversely by Tb if you use the ratios). Clarification is needed.
- Line 258. Gini importance should be better referenced and briefly commented on. What is the difference with, e.g., the predictor importance proposed by Breiman?
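To make the distinction concrete: impurity-based (Gini) importance comes for free from training, while Breiman's predictor importance permutes each input and measures the resulting skill drop. A minimal sklearn sketch on synthetic data (not the authors' configuration):

```python
# Sketch: impurity-based (Gini) importance vs. permutation importance
# (Breiman-style) for the same fitted Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 3))
y = 3 * X[:, 0] + rng.normal(scale=0.5, size=400)   # only feature 0 is informative

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
gini = rf.feature_importances_                       # impurity decrease, at train time
perm = permutation_importance(rf, X, y, n_repeats=10,
                              random_state=0).importances_mean
```

On well-behaved data the two rankings usually agree, but Gini importance can inflate high-cardinality or correlated inputs, which is why reporting both (or at least justifying the choice) would strengthen the analysis.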
Section 4
- Section 4.1. Following the comment above, this is the core of my concerns: the scarce correlation with density at 4 cm could depend on the microwaves’ scarce sensitivity to such a shallow depth. Also, the reverse correlation along the coasts could depend on melting, not entirely removed, that occurs more frequently than in the central part of Antarctica. Again, the physics behind this should be analysed.
- Figure 1. Although referred to in section 4.1, I find this figure poorly informative. My suggestion is to remove or replace with something more meaningful.
- Figure 2. Did you evaluate the correlation with density at 1 m? At the end which was the role of this parameter in your study?
- Figure 4. The plots in the figure are quite small and difficult to read. I would suggest revising.
- Figure 5 left: the scatterplot should refer to the test results (i.e. those obtained on subset II), not to the training results (subset I). Usually, retrieval scatterplots show the estimated vs. the target, not vice versa. The plot or caption should also cite the statistics and the total data amount. Finally, the R value seems even worse than that of the direct correlation with Tb Ku and Ka in figure 2 for most of the pixels. Is that right? What is the explanation?
- Figure 6, with the doubled colorbar, is difficult to interpret (especially figure 6d). I would suggest revising.
- Figure 7: why not also add correlation/determination coefficient maps like those in figure 2? In my view this is more informative than, e.g., the 10-year averaged maps of figure 6.
Citation: https://doi.org/10.5194/egusphere-2023-1556-RC2 - AC1: 'Reply on RC2', Weiran Li, 25 Nov 2023
Status: closed
-
RC1: 'Comment on egusphere-2023-1556', Anonymous Referee #1, 25 Sep 2023
This paper details a study using machine learning (ML) to examine Antarctic firn density. The paper is interesting and needs some further revisions before it is suitable for publication. I have put some suggestions and questions below.
Major comments:
Introduction, I suggest you start bigger, why does Antarctica ice sheets matter to the globe? Also, I think you need to define firn for folks who are not clear on what it is.
On line 142, you say that the firn model has a resolution of 27 km – is that sufficient to capture the firn variations? This is quite coarse, in my opinion. Is this 27 km by 27 km grid cells? I think this needs to be stated more clearly.
I think you need at least one study site figure that has all of the locations you refer to in the paper on one introductory map. See my comment from Line 152, for example.
Overall, the study design seems confusing. You take the time to cluster the data, but then you do not use it for the analysis, really. Why would you not use that to identify the dry-snow zones, and then perhaps build multiple RF models to see what zone could be best captured? This seems like an interesting approach to take but was not used. I think that this would also eliminate the need to only model the non-wet areas if you simply remove the regions that do poorly in satellite observations.
I do not understand why you didn’t use the RF and importances to reduce your model variables. As you show in Figure 5, it looks like these anomalies are not adding much to the RF model. I think you might be able to remove them in the analysis.Did you consider other types of ML models, or did you just decide to use RF approaches? Why not consider other approaches?
On lines 325, you say “that do not correspond to changes in densities in dry-firn regions?. This line has me wondering about the objective of your work. Are you interested in the firn estimation or are you interested in the change in firn over time? Is the RF model developed for this? Or, are the clusters? You say in the beginning of the paper (Line 71) that the objective of this paper is to “assess the feasibility of combining radiometer and scatterometer remote sensing data to assess Antarctica-wide dry firn density.” But, you also say on Line 220 “As our goal is to relate the satellite time series to assess spatio-temporal variations in firn density, we adopt an alternative approach that uses the output of IMAU-FDM as training data instead of relying on in situ data.”. What is the objective of this work? If it is average firn, then you can develop your model in one way, but if it is not, then you should develop it in another.
Minor Comments:
Line 17, short-term (or seasonal) variations, is it both or do you just mean seasonal?
Line 30, This statement needs a reference.
Line 63 However, the precise mechanisms underlying the interaction between firn densities and satellite observations cannot always be fully understood (Champollion et al., 2013; Fraser et al., 2016; Rizzoli et al., 2017). What do you mean by this? Interaction implies they are interacting, which they are not…
Line 67, “to other areas or time periods therefore requires further assessment (Tran et al., 2008;
Fraser et al., 2016; Nicolas et al., 2017; Rizzoli et al., 2017)”. What did they find? Was it successful, i.e, did it work?
Generally, italicize In situ.
On line 70, you talk about calibration. You did not mention calibration previously, and it is unclear what this is referring to. Models? The satellites? Fusion methods? I think this needs to be tied to modeling and why calibration is needed. Otherwise it seems to be coming in the text out of the blue.
Line 72, you talk here at three experiments, did you compare /use the observations in situ ever? It seems like the SUMup is not used (or mentioned) in any one of the experiments. I think if you are going to mention SUMup, you need to say where it was applied in the experiments.
Line 132, “outputs of the regional atmospheric climate model RACMO2.3p2” These scales seem really different… What resolution is the model run at?
Line 140 – move these two sentences up to say this earlier (perhaps line 131), that will assist with my previous comment. The first sentence of this paragraph could be combined with the previous one.
Line 132, RACMO2.3p2 – define?
Generally, through the text, you refer to “the model output”, or “models”. As you have multiple models, I suggest calling the models by their names, or ensuring they are referenced clearly to distinguish the model.
Line 135, we focus on the density of the… How many layers are there in this model in total?
Line 142, the firn data are reprojected – this is modeled data, correct? I think you want to make sure to differentiate the model from the observations.
Line 138, “…have been acquired at approximately this depth…” Why? This seems kind of arbitrary. Also, 4 cm seems very shallow for firn. Is this because it is in Antarctica?
Line 145, Surface Mass Balance and Snow on Sea Ice Working Group (SUMup) dataset. You have already used this acronym, define it earlier.
Line 146, “at the smallest mid-point depths” More clarity please, what is ‘small’ and what is the mid-point of?
Line 151- For each date of measurement at each location, talk about the locations and dates first… What locations are these dates at?
Line 152, Dome C, where is this? Map?
Line 159, By incorporating this information… I don’t understand how the ERA5 data was used and why it was used. This needs to be better explained.
In Section 2.5, are you talking about comparing model and observations (at points?).. and the satellites? I think this needs to be thought through and justified in the text. Comparing satellite and model data with single point measurements is tricky. There are a lot of references out there about how to do this, particularly in the climate modeling realm. I suggest the authors read some of these papers and at least add a discussion in the text around this.
Sometimes you say “firn data” and other times you talk dry firn. Should this be defined? Can you make sure you are being consistent through the text?
Line 168m dry-snow zones, what are these?
Line 184, model training procedure. Which model are you talking about?
Line 190-195, this is not very clear. Can you rephrase?
Line 196, variations of other properties. What other properties?
Line 196, In addition, although may not have such large dependence on firn temperature as TB, we use its time series anomalies to maintain consistency with TB. This is unclear, can you rephrase?
Line 204, ‘distance’ between pixels. Make sure to clarify that this is not spatial distance.
Line 205, between the parameters of different pixels. What parameters?
Line 210, different satellite parameters, together with the IMAU-FDM density for each cluster. I do not understand what this means. What parameters? I think you need to make a table of parameters.
Line 217, RF regressor. Add more references since this approach is now widely used in climate science, for snow distribution mapping and other work.
Line 225, for pattern recognition in noisy datasets. Add a reference here.
Line 226, reduce the variance of the model and prevent overfitting. Add a reference.
Did you consider other ML approaches?Line 230-onward. Are you building a RF for each time step to estimate the timeseries at each grid cell? How many samples in total went into the model? Line 244 talked about pixels, and the resulting sample size. Is this the total number of samples? Do you think this is enough to train a RF, especially considering the results?
Table 1. These are all hyperparameters, do you call them parameters (which you use other times in the paper for the actual input parameters to the model, which in itself is confusing).
Line 258. Why did you only use Gini importance vs other importance metrics? Did this choices affect any of your importance rankings?
Line 264, by means of the RMSE. Do you mean averages or do you mean via using RMSEs?
Line 269, this is the first time you refer to Figures… why do it here and not in the rest of the methods?
Line 277, ‘cluster Firn 5’, you have not introduced the cluster results yet so we do not know what these are.
Figure 1, can you tell us which ones are from the satellites parameters and which ones are from the model? Again, a table might help with this.
Line 283, especially at the location of Dome C, again, need to show on the maps or include a figure.
Line 287, There are a lot of other good reasons why the RF should be used, I do not think that this is the strongest one.
Section 4.2 Do you think that you could produce different RF models for each cluster, perhaps? I think this would be very interesting to understand the difference between the performance of each of these models. For instance, if the dry firn can be modeled with lower RMSE /error than some of the other clusters. Honestly, I am still unclear if you did it this way or not.
Figure 4 and Figure 5a
These figures are making me wonder what it is that you are trying to do. For instance, are you trying to estimate the time series of the seasonality and variability you see in Figure 4 with the RF? What is the RF estimating, exactly…? You say “firn densities based on satellite parameters” and you talk about a time series, but I am wondering how you are doing this. What is X in Equation 4, actually? I do not know if this is ever said.
Figure 5b, are all these parameters standardized? Is the importance based on the standardized inputs? I ask because the anomalies appear to be the least important, which makes me wonder whether the other parameters are standardized at all. If the anomalies contribute little to the model, did you try removing them? Does the model improve with fewer parameters? Are there any strong correlations between these parameters? How are they related, or not, to each other?
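As an aside, the inter-feature correlation check the reviewer asks for is straightforward to run; a minimal sketch with pandas, using synthetic placeholder features (the names and distributions below are illustrative, not the paper's actual inputs):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
# Placeholder features standing in for the paper's inputs
# (Tb channels, backscatter, and a derived ratio).
tb_ka = rng.normal(200.0, 10.0, n)
tb_ku = 0.9 * tb_ka + rng.normal(0.0, 2.0, n)   # strongly correlated channel
sigma0 = rng.normal(-10.0, 1.5, n)               # independent backscatter
ratio = tb_ku / tb_ka                            # derived, hence dependent

X = pd.DataFrame({"Tb_Ka": tb_ka, "Tb_Ku": tb_ku,
                  "sigma0": sigma0, "Tb_ratio": ratio})
corr = X.corr()
print(corr.round(2))

# Highly correlated pairs (|r| > 0.8) are candidates for removal
# before retraining, to see whether the RF degrades at all.
high = [(a, b) for a in corr for b in corr
        if a < b and abs(corr.loc[a, b]) > 0.8]
print(high)
```

Dropping one feature from each highly correlated pair and re-fitting is a quick way to answer the reviewer's "does the model improve with fewer parameters?" question.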
Line 301, ‘The differences between these clusters mainly arise from deviations in TBanom and, to a lesser extent, σ°anom.’ What is different about them?
Line 298. If Firn 1–4 are basically the same, why are they not treated as a single cluster?
Line 303. Are the melt events shown/evidenced in time in this region? Can you discuss this a little? You refer to a paper but do not go into detail otherwise.
Line 305. Can you describe how density would change under these melt events, and why? You do not give much background on that.
Line 306, ‘Firn 5, where the melt event of 2016 shows a prolonged effect on the anomaly time series due to the formation of a sub-surface refrozen high-density layer in IMAU-FDM.’ Again, what are the implications of this, and what does it mean for the firn?
Figure 6. Again, this figure only shows results as temporal averages. How did the time series of the RF do?
Line 383, ‘It is important to note that the wet firn clusters are not used in the following RF steps due to the complex impact of the melt–refreeze cycle on satellite observations.’ Again, this makes me think that the RF and the cluster analysis are not related.
Line 317, ‘exhibiting a linear relationship between predictors and the predicted variable’ – do you mean the predictand? Saying it this way is confusing.
Figure 5, add units.
Figure 5a. Why do no values exceed this amount? I wonder whether your training data set somehow selects lower firn density values… are you randomizing between your training and test sets?
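For reference, the randomised split the reviewer is asking about can be sketched with scikit-learn; the features and the density target below are synthetic stand-ins, not the paper's data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 4))                        # stand-in satellite features
y = 350.0 + 40.0 * X[:, 0] + rng.normal(0, 5, 2000)   # stand-in density, kg m-3

# shuffle=True (the default) randomises which samples land in each subset,
# avoiding a split that systematically under-samples high densities.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, random_state=0, shuffle=True)

# Sanity check: the target distribution should look similar in both halves.
print(y_tr.mean(), y_te.mean(), y_tr.max(), y_te.max())
```

Comparing the target's mean and upper tail across the two subsets is a quick check that the split itself is not truncating the density range seen in Figure 5a.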
Figure 6 shows that the model is basically as good as the RF (if not better at everything other than the mean). So why do you need an RF in this case? How difficult is the model to set up and apply? Again, is there a good reason for the RF here if it does not perform better, is not finer in scale, and does well only on average?
Figure 6d, can you show which is which? Use different symbols instead of colours? They are difficult to differentiate. This intercomparison with observations is likely very challenging (which I think is what you are attempting). I might suggest some sort of spatial upscaling for the single-point/in situ observations.
Figure 8 illustrates how poorly the RF does for a time series. But I am unclear whether you are doing this in the right way; clarity of methods is required.
Citation: https://doi.org/10.5194/egusphere-2023-1556-RC1 - AC2: 'Reply on RC1', Weiran Li, 25 Nov 2023
RC2: 'Comment on egusphere-2023-1556', Emanuele Santi, 05 Oct 2023
The subject of this manuscript is of definite interest to the scientific community. The introduction correctly frames this study in the existing literature, the language is clear, and the argument unfolds smoothly. However, the innovation with respect to other studies should be better pointed out, and the description should be improved in some respects, as should the presentation of the results. Besides this, the paper suffers from some gaps in the microwave background, and I suspect two conceptual issues. The first concerns the attempt to retrieve the density of the 4 cm top layer, which should be quite transparent at the considered MW frequencies in dry conditions. The second concerns merging direct satellite measurements and derived indices in the RF inputs: based on information theory, the indices should not bring any additional information independent of the Tb from which they have been computed, so, also based on my experience, these indices should affect the results negligibly.
Detailed comments:
Introduction.
- The introduction contains a review of the state of the art that is more than enough to frame this paper. I would only suggest clarifying the aspects related to the different spatial resolution, coverage, and revisit time when mentioning active and passive MW.
Section 2
Section 2.1.
- Equations 1 and 2 are properly referenced to the original publications; however, a short sentence about the physical principles behind them would be useful for the reader.
- Lines 94–95. The dramatic change in emission mechanism due to the presence of liquid water within the ice sheet should be commented on, although this point is mentioned later in section 3.2. The same applies to the scattering in section 2.2.
Section 2.2.
- The linear correction for local incidence angle (LIA) sounds a bit odd to me. LIA should already be accounted for when computing the NRCS to extract the backscattering (σ°). In any case, the dependence of backscattering on LIA is not linear at all. Finally, as far as I understand from page 5, line 125, in the end you did not use data corrected with eq. 3. Could you clarify further?
- The spatial and temporal co-registration between ASCAT and SSMIS should be better described; depending on the processing you applied, this could lead to errors and artifacts. In the end, how many co-located Tb and σ° did you obtain? This is important information for understanding the RF implementation, although some of it is addressed later.
Section 2.3
- Lines 135–138. As stated in the general comments, the attempt to retrieve density at 4 cm raises a conceptual issue. The top 4 cm layer should be almost transparent not only at C band but also at Ka band in the case of dry firn. I wonder whether you are obtaining results based on an indirect correlation of the top-layer density with deeper layers, to which microwaves are instead sensitive. It is no wonder if the RF achieves successful retrievals: machine learning can exploit almost any kind of input/output relation, but the risk of finding something based on apparent relationships is always around the corner. Used as “black boxes”, ML could potentially relate newborns in China to the weather in the USA, but what would be the utility? I believe a robust physical justification is needed.
- Lines 138–140. The sentence is unclear to me; could you please rephrase it? Where was the density at 1 m depth used later?
Section 3
Section 3.2.
- Is the Tb ratio the same as in eq. 1? If so, there is no need to introduce it again with a reference.
- In my understanding, volume decorrelation was not introduced before. The cited work by Rizzoli uses X-band SAR; it is not clear whether this finding is also valid for radiometric measurements (scattering and emission are complementary to each other).
- Lines 195–199. The normalization by firn temperature is embedded in both parameters you defined in eq. 1 and 2. What, then, is the reason for removing the average seasonal Tb signal? And what is the reason for doing the same with the backscattering, which is almost insensitive to temperature? Moreover, machine learning techniques such as RF can cope with redundant, noisy, and biased data, so using time series of measurements or of their anomalies should not change the results much. Finally, there is also a concern about merging Tb with their ratios, which is commented on below.
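For readers following this point, removing the mean seasonal cycle to obtain anomalies is a standard operation; a minimal sketch with pandas on a synthetic daily Tb series (the amplitudes and noise level are illustrative, not the paper's):

```python
import numpy as np
import pandas as pd

# Ten years of synthetic daily Tb with a seasonal cycle plus noise.
dates = pd.date_range("2010-01-01", "2019-12-31", freq="D")
doy = dates.dayofyear.to_numpy()
rng = np.random.default_rng(1)
tb = 210.0 + 15.0 * np.sin(2 * np.pi * doy / 365.25) + rng.normal(0, 2, len(dates))
s = pd.Series(tb, index=dates)

# Mean seasonal cycle: average across years for each day of year,
# then subtract it from each observation to get the anomaly.
climatology = s.groupby(s.index.dayofyear).transform("mean")
anom = s - climatology
print(anom.std())   # close to the 2 K noise level once the cycle is removed
```

Whether feeding the RF these anomalies instead of the raw series changes anything is exactly what the reviewer suggests testing, since tree ensembles are largely invariant to such monotone, per-feature shifts only when the shift is constant, not seasonal.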
- Lines 201–211. The clustering algorithm should be better explained, perhaps with a supporting figure/diagram. I do not believe a reader unfamiliar with the Ward algorithm can understand this section.
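For orientation, Ward's method is agglomerative clustering that repeatedly merges the pair of clusters whose merge gives the smallest increase in total within-cluster variance. A minimal sketch with SciPy on synthetic data (not the paper's pixels):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Two well-separated synthetic groups of 3-feature "pixels".
X = np.vstack([rng.normal(0.0, 0.5, (50, 3)),
               rng.normal(5.0, 0.5, (50, 3))])

Z = linkage(X, method="ward")                      # hierarchical merge tree
labels = fcluster(Z, t=2, criterion="maxclust")    # cut the tree into 2 clusters
print(np.unique(labels))
```

A dendrogram of `Z` (via `scipy.cluster.hierarchy.dendrogram`) is the kind of supporting diagram that would make the section accessible to readers unfamiliar with the algorithm.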
Section 3.3
- Lines 231–246. With “sample”, do you refer to the set of temporally and spatially co-registered SSMIS and ASCAT measurements for a given pixel? In my understanding, for both subsets I and II you randomly selected 100 pixels from the 7 clusters over Antarctica described in section 3.2 (spatial: 25 km resolution per pixel) and considered the time series of satellite measurements (temporal: approx. one set of SSMIS + ASCAT measurements per pixel per day over 10 years). In the end, you should have used 365300 sets for training and the same amount for testing. In other words, you considered about 125000 km² for training and testing and applied the trained RF to the remaining ≃14000000 km² of the Antarctic surface, which is notable. Perhaps some more information could be provided…
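The reviewer's back-of-envelope sample count, made explicit (these numbers are the reviewer's reading of the manuscript, not figures confirmed by the authors):

```python
n_pixels = 100                 # randomly sampled pixels per subset (I or II)
days = 3653                    # ~10 years of daily co-located observations
samples_per_subset = n_pixels * days
print(samples_per_subset)      # 365300 sets for training; same amount for testing

pixel_area_km2 = 25 * 25       # 25 km grid cells
area_used_km2 = 2 * n_pixels * pixel_area_km2   # train + test pixels combined
print(area_used_km2)           # 125000 km2 of ~14e6 km2 of Antarctic surface
```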
- Equation 4. The proposed input combination raises another concern: from information theory, the Tb ratios do not bring the RF any additional information independent of the Tb from which they have been computed; therefore (this is also my personal experience), the results should not be affected by these inputs (or, conversely, by the Tb if you use the ratios). Clarification is needed.
- Line 258. Gini importance should be better referenced and briefly commented on. What is the difference with respect to, e.g., the predictor importance proposed by Breiman?
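The comparison the reviewer asks about can be sketched with scikit-learn: `feature_importances_` is the impurity-based measure (called Gini importance for classifiers; variance reduction for regressors), while `permutation_importance` follows Breiman's permutation idea. Synthetic data for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + rng.normal(0, 0.1, 500)   # only feature 0 is informative

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
mdi = rf.feature_importances_                 # impurity-based (MDI) importance
perm = permutation_importance(rf, X, y, n_repeats=5,
                              random_state=0).importances_mean

# Both should rank feature 0 first here, but the two measures can diverge
# when features are correlated or have very different cardinalities.
print(mdi.argmax(), perm.argmax())
```

Reporting both measures, as sketched above, would answer whether the choice of importance metric affects the rankings discussed at line 258.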
Section 4
- Section 4.1. Following the comment above, this is the core of my concerns: the weak correlation with density at 4 cm could stem from the microwaves' limited sensitivity to such a shallow depth. Also, the reversed correlation along the coasts could be due to melting that was not entirely removed, which occurs more frequently there than in the central part of Antarctica. Again, the physics behind this should be analysed.
- Figure 1. Although referred to in section 4.1, I find this figure poorly informative. My suggestion is to remove it or replace it with something more meaningful.
- Figure 2. Did you evaluate the correlation with density at 1 m? In the end, what was the role of this parameter in your study?
- Figure 4. The plots in the figure are quite small and difficult to read. I would suggest revising.
- Figure 5, left: the scatterplot should refer to the test results (i.e., those obtained on subset II), not to the training results (subset I). Usually, retrieval scatterplots show the estimated vs. the target, not vice versa. The plot or caption should also cite the statistics and the total data amount. Finally, the R value seems even worse than that of the direct correlation with Tb at Ku and Ka band in figure 2 for most of the pixels. Is that so? What is the explanation?
- Figure 6, with its doubled colorbar, is difficult to interpret (especially figure 6d). I would suggest revising it.
- Figure 7: why not also add correlation/determination coefficient maps like those in figure 2? In my view, this would be more informative than, e.g., the 10-year averaged maps of figure 6.
Citation: https://doi.org/10.5194/egusphere-2023-1556-RC2 - AC1: 'Reply on RC2', Weiran Li, 25 Nov 2023