This work is distributed under the Creative Commons Attribution 4.0 License.
Arctic sea ice radar freeboard retrieval from ERS-2 using altimetry: Toward sea ice thickness observation from 1995 to 2021
Abstract. Sea ice volume's significant interannual variability requires long-term series of observations to identify trends in its evolution. Despite improvements in sea ice thickness estimation from altimetry during the past few years thanks to CryoSat-2 and ICESat-2, former ESA radar altimetry missions such as Envisat and especially ERS-1 and ERS-2 have remained under-exploited so far. Although solutions have already been proposed to ensure continuity of measurements between CryoSat-2 and Envisat, there is no time series integrating ERS. The purpose of this study is to extend the Arctic freeboard time series back to 1995. The difficulty in handling ERS measurements comes from a technical issue known as the pulse-blurring effect, which alters the radar echoes over sea ice and the resulting surface height estimates. Here we present and apply a correction for this pulse-blurring effect. To ensure consistency of the CryoSat-2/Envisat/ERS-2 time series, a multi-parameter neural-network-based method to calibrate Envisat against CryoSat-2 and ERS-2 against Envisat is presented. The calibration is trained on the discrepancies observed between the altimeter measurements during the mission-overlap periods and on a set of parameters characterizing the sea ice state. Monthly radar freeboards are provided with uncertainty estimates based on a Monte Carlo approach that propagates the uncertainties along the whole processing chain, including the neural network. Comparisons of corrected radar freeboards during overlap periods reveal good consistency between missions, with a mean bias of 3 mm for Envisat/CryoSat-2 and 2 mm for ERS-2/Envisat. The monthly maps obtained from Envisat and ERS-2 are then validated by comparison with several independent datasets such as airborne, mooring, and direct measurements as well as other altimeter products. Except for two datasets, the comparisons yield correlations ranging from 0.42 to 0.94 for Envisat and 0.6 to 0.76 for ERS-2. The study finally provides radar freeboard estimates for winters from 1995 to 2021 (from the ERS-2 mission to CryoSat-2).
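The abstract describes the neural-network calibration and the Monte Carlo uncertainty propagation only at a high level. The sketch below is a minimal, hypothetical illustration of the general idea of training a regressor on inter-mission discrepancies during an overlap period and then applying it outside that period; the synthetic data, the feature set, the network size and the choice of target (the CryoSat-2 minus Envisat freeboard difference) are assumptions of this example, not the authors' implementation.

```python
# Illustrative sketch only: a generic overlap-period calibration in the spirit
# described in the abstract. Data, features and network setup are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical gridded overlap-period samples (e.g. Envisat vs CryoSat-2, 2010-2012)
n = 5000
X = np.column_stack([
    rng.normal(0.10, 0.05, n),   # Envisat radar freeboard (m)
    rng.uniform(0.0, 1.0, n),    # sea ice concentration (fraction)
    rng.uniform(0.0, 1.0, n),    # multi-year ice fraction
    rng.normal(10.0, 3.0, n),    # a waveform/backscatter parameter (dB)
])
y = rng.normal(0.02, 0.03, n)    # target: CryoSat-2 minus Envisat freeboard (m)

# A 90/10 split and a sigmoid ("logistic") activation mirror the setup discussed
# later in this review thread; adam is one common optimizer choice.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), activation="logistic",
                     solver="adam", max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

# Apply the learned correction to Envisat freeboards outside the overlap period
envisat_fb = X_te[:, 0]
calibrated_fb = envisat_fb + model.predict(X_te)
```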
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2022-214', Jack Landy, 21 Jul 2022
The authors construct a 25-year record of Arctic sea ice radar freeboard by reconciling the measurements from three radar altimetry missions, one ongoing and two historic. Their primary motivation is to generate the first step towards a long-term sea ice thickness record for the Arctic Ocean. This would be the first observational sea ice thickness record spanning such a long period and would offer valuable comparison to existing proxy sea ice thickness (SIT) records based on ice age and models. In my view, a robust 25+ year time series of Arctic sea ice thickness would represent a major scientific breakthrough with implications for understanding global climate changes in the modern era and validating and improving sea ice models, among other potential applications.
Generally, I find the approach and methods to be scientifically sound. I have some minor comments but nothing that questions the rigour of the generated time series. The validation against existing SIT data from satellites, airborne and in situ sensors is comprehensive and convincing.
Excellent work on a really valuable study – it was a pleasure to read! Feel free to get in touch if you have any questions, Jack Landy
Minor comments/edits:
Line 2. Sea ice volume’s..?
L14-15. I would suggest including other statistics of the variability on the bias within the abstract. Given the ML algorithm aims to remove the bias I would argue the stats on variability are more interesting for the reader.
L28. Technically past radar altimeters have not allowed basin scale, so altimetry doesn’t offer a ‘global approach’ over the long term. But this is nit-picky.
L51. Explain ‘heuristic retracker TFMRA50’.
L65. Check Appendix Table 1. Does this tally?
L103-104. How is it aggregated? Bit vague.
L134-135. Requires citations.
Figure 1. Could you add here a map of the satellite coverage and the locations of different validation datasets? This would be useful for the reader to understand limits of the record and interpret differences to specific validation data.
L166-167. Which version of the IS-1 data was used?
Figure 2. I would suggest adding histograms to one side for each of the three elevation profiles, so it is easier to visualize any differences/biases.
L215. Sure, but how much are they improved quantitatively if we are using Envisat as the reference?
L226-227. Can you add a table of the thresholds after they have been calculated to keep lead/floe proportions the same during overlap periods? This would aid repeatability of the study.
Also is this based on SIC from an external dataset? Otherwise which mission do you use as a reference to calibrate the other to?
Since snagging to off-nadir leads is more likely with the LRM mode missions, is there a chance these missions will include more leads accidentally classified as floes, using this approach for an equal proportion of floe/lead?
L236-237. More information is required on the interpolation method and procedure.
L237-238. Do you discard rFBs above a max distance to the nearest lead? If so what limit do you use?
L250. Can you explain a little more about this constant SLA bias in LRM? Why does it appear and what could be done, in theory, to remove it?
Figure 4. Add the sensing mode to the plot. The CS2 data here is SAR mode right, not calculated from pLRM?
L278. Explain these terms.
L283-284. What does this mean? Retrained again or just some sort of tuning? Might it be very different from the training with 90-10 split?
L301-303. Needs more info. Why do you calculate uncertainty differently between leads and floes? The uncertainty at floes is governed by the variability in height measurements at proximal leads.
What distance is used to calculate an along-track mean elevation? Is the variability in individual floe height obs around this not just a measure of the topography? It will be higher over MYI but does this realistically mean the uncertainty is higher?
L307. How are the uncertainties reduced during gridding? Speckle noise should drop as a function of N observations, but SLA uncertainty should only drop as a function of N tracks (because SSH error is highly correlated along track).
L312. Systematic uncertainty due to roughness is 20-30% of the freeboard as well as of the thickness.
L316-318. Based on the schematic in Figure 6 everything you've done seems fine, but it is still confusing to follow all the steps. What are these 'other inputs'? And which variables do you divide by the sqrt of the number of observations vs the sqrt of the number of tracks when gridding? (A small numerical sketch of this scaling follows this comment list.)
L325-329. How do you estimate the Gaussian noise distribution statistics? Is this the sigma = 2*sigma_omega in Figure 6? The output from a Monte Carlo error budget depends closely on the assumptions taken for the error distributions, so this is important.
L340. For which months in Fig 8?
L357. What are these numbers as a % of the mean rFB?
L359. Again what are these as a % of the mean rFB?
L363. Can you do the same for the ERS2-Envisat comparison?
Figure 7b. It looks like you may have some spurious tracks in Hudson Bay, Baffin Bay and Bering Strait that could contaminate the comparisons?
Figure 7 caption. Emphasize the distributions include CS2 data only for the coinciding region south of 81.5N.
L384. ‘static data’?
L392. I think it is reasonable to discount IMBs because they represent only the single floe they are deployed on (usually a thicker floe) and not their surrounding 12x12 km grid cell area. The authors could remove these comparisons so they don’t draw readers’ attention and lead them to the wrong conclusions about the satellite data validity; but that is up to the authors.
L397. Could be attributed to, but not definitely.
Figure 9 and elsewhere. Define the acronyms of statistical tests in the caption.
L406-407. Can this say anything about the calibration? Are the BGEP ice conditions more representative of average sea ice conditions in the Arctic and the other ULS datasets more of thin ice conditions?
Was the calibration not slightly overestimating thin ice thickness for Envisat?
L415-416. How do these numbers compare to your estimated uncertainties for the same regions?
L420-422. What are the statistics like for CS2 data processed with this method? You don’t necessarily need to show a plot, but some idea of biases would be useful. Do you also see generally negative biases for CS2? Especially over FYI?
L425-426. Could you try them also with the adapted Warren climatology and see if biases get any smaller? Would help to clarify the impact of snow loading.
L427. Is Section 2.3 correct?
L441. It is worth making it a bit clearer on Fig 13 and throughout this section that these volumes miss out everything >81.5N.
Figure 13. I think readers would find it interesting to see more of your rFB dataset. I'd suggest an additional figure showing trends in rFB as a map for the overlap region, highlighting where the trends are significant or not.
Table A1. SAR you mean? or is this actually the CS2 LRM mode? I think SAR was used here right so state SAR parameters?
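Regarding the two comments above on how uncertainties are reduced during gridding (L307) and on which terms are divided by the square root of the number of observations versus the number of tracks (L316-318): the snippet below is only a back-of-the-envelope illustration of that bookkeeping under assumed error magnitudes and counts, not the authors' processing.

```python
# Illustrative sketch only: how an uncorrelated and an along-track-correlated
# error term are typically reduced when gridding into a monthly cell.
# All magnitudes and counts are assumptions made for the example.
import numpy as np

sigma_speckle = 0.10   # per-echo noise (m), uncorrelated between measurements
sigma_sla     = 0.03   # sea level anomaly error (m), ~fully correlated along one track

n_obs    = 400         # individual freeboard measurements in the grid cell
n_tracks = 12          # distinct tracks crossing the cell

# Uncorrelated noise averages down with the number of measurements,
# correlated SLA error only with the number of independent tracks.
sigma_cell = np.sqrt(sigma_speckle**2 / n_obs + sigma_sla**2 / n_tracks)
print(f"analytic gridded uncertainty ~ {sigma_cell*100:.1f} cm")

# Toy Monte Carlo version of the same budget: perturb both terms and look at
# the spread of the gridded mean.
rng = np.random.default_rng(1)
mc = (rng.normal(0.0, sigma_speckle, (10000, n_obs)).mean(axis=1)
      + rng.normal(0.0, sigma_sla, (10000, n_tracks)).mean(axis=1))
print(f"Monte Carlo estimate         ~ {mc.std()*100:.1f} cm")
```

The toy Monte Carlo reproduces the analytic value, which also illustrates the point raised about L325-329: the width assumed for each input noise distribution directly controls the resulting budget.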
Citation: https://doi.org/10.5194/egusphere-2022-214-RC1
AC1: 'Reply on RC1', Marion Bocquet, 16 Dec 2022
We would like to thank the reviewer for his careful reading of the manuscript, for the positive feedback, and for the relevant and constructive remarks that have helped to improve the quality of the manuscript. In response to your comments, we have revised the manuscript to correct the textual issues and improve the readability of the document. We hope that these modifications will meet your requirements. You will find our answers in the attached document.
CC1: 'Comment on egusphere-2022-214', Robbie Mallett, 29 Aug 2022
I've attached some comments as a supplementary pdf. Congratulations on a nice paper on some really critical historical data.
AC4: 'Reply on CC1', Marion Bocquet, 16 Dec 2022
We would like to thank you for the very detailed and relevant comments you first provided as a community comment. A detailed answer is given in the same document as the reply to your referee comments; please see the document attached to that comment.
Citation: https://doi.org/10.5194/egusphere-2022-214-AC4
RC2: 'Comment on egusphere-2022-214', Robbie Mallett, 26 Sep 2022
I left a community comment on this manuscript (https://doi.org/10.5194/egusphere-2022-214-CC1) before being nominated as a referee. I have therefore read and considered the manuscript again.
As part of this, I investigated the data that was made available to me as a nominated reviewer. I wanted to see the size of the correction/calibration applied by the neural network presented in this paper. This has led me to question the nature of the ‘correction’ being applied, and whether it is reasonable to present this data product as a series of ‘corrected’ radar freeboard values at all. I would like to review this manuscript again once the queries raised here have been addressed.
I have attached my analysis as a supplementary pdf.
AC3: 'Reply on RC2', Marion Bocquet, 16 Dec 2022
We would like to thank the reviewer for his careful reading of the manuscript and for the relevant remarks that have helped to improve the quality of the manuscript.
You will find our answers in the attached document, which contains the answers to this comment as well as to your community comment.
RC3: 'Comment on egusphere-2022-214', Anonymous Referee #3, 10 Nov 2022
First, I would like to express my apologies to the authors for taking this long to provide my review, due to personal reasons. Nonetheless, I was asked to still provide it, also in light of the two already published referee comments. With this in mind, I will focus on aspects I do not see covered yet, or extend on raised issues as I see fit, with a focus on the “calibration” using a neural network. I provide general comments first, with some additional specific comments at the end.
In their study, the authors present their approach to generating a new dataset of altimetry-based freeboard data, with ERS-2 data incorporated for the first time. This is a great achievement in itself and definitely justifies publication. Furthermore, the authors put substantial effort into validating their results against several different types of validation data. ERS data in general are a great challenge to work with, and there is a reason why not many people are actually working on making use of them over sea ice.
However, as also pointed out in the very detailed review by Robbie Mallett, who went to great lengths to analyze the results and underlying data, it appears the chosen methodology does not really work the way the authors, or at least any potential reader, would expect. There appears to be strong evidence that the large mix of input data to the neural network alongside the ERS freeboard estimates dominates the outcome. Hence, the NN did not learn what was expected but something else. While this is not necessarily bad, it is a fundamental problem of the presented study, as in my opinion this can be seen as grist to the mill of all machine learning or artificial intelligence sceptics. It should clearly be stated what the impact of each dataset is on the resulting product, or rather that it is apparently not the input raw freeboard that dominates. Potentially, the product could even be generated without the raw freeboard? This really should be clarified upfront and likely further investigated by the authors before publication.
General comments:
- L257: One could doubt the idea to use this kind of freeboard as an input in the first place. Wouldn’t it make a difference to choose a more appropriate retracker threshold for leads in LRM waveforms, like 90/95 %? This might not solve the problem with regional patterns but would likely eliminate the negative freeboards and deliver a better initial state. (A minimal sketch of such a threshold retracker follows these general comments.)
- On a very general note: What are the improvements over Guerreiro et al. (2017)? What justifies the use of a neural network instead of simply extending this methodology, which had a more direct link to the actual measurements of the instrument (as also suggested by the authors in L258-260)?
- L277: Out of curiosity, did the authors test various setups, and did this architecture of the NN show the best results? How was it evaluated and what different setups were used? Things like the number of layers, number of neurons per layer, activation functions etc. come to mind, and all the mentioned specifics come without references or justification! For example, there are pretty much no modern studies on ML/AI that do not use some sort of ReLU activation function, so why do the authors use a sigmoid? Some elaboration on this might be informative to the readers and would also provide a broader background to non-ML enthusiasts in the sea-ice community.
- L279: The authors should clarify the hyperparameters for non-AI/ML-expert readers. Without any reference, I fear this is a lot to ask from potential readers of a non-AI journal. Additionally, what optimizer did the authors use? This can also have a substantial impact on the training process and the model performance and is totally unmentioned in the current version of the manuscript.
- L280: It is not clear to me how these 5 models differ from each other. By slightly different choices of the learning rate? Please elaborate!
- L282: Common practice would be a split of around 80/20 % or 75/25 %; how do the authors justify such a small test-set size? This could result in a quite non-representative test dataset in the end.
- While not a native English speaker myself, I further suggest some general English language editing before publication.
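As background for the retracker-threshold question above (and for the request in RC1 to explain the heuristic retracker TFMRA50), here is a minimal, purely illustrative sketch of a threshold-first-maximum retracker. The function, the toy waveform and the omission of smoothing, oversampling and noise handling are assumptions of this example, not the processing used in the manuscript.

```python
# Illustrative threshold-first-maximum retracker (TFMRA-style), heavily simplified.
import numpy as np

def tfmra_retrack(waveform, threshold=0.5):
    """Return the fractional range-gate index where the leading edge of the
    first maximum crosses `threshold` x its peak power."""
    wf = np.asarray(waveform, dtype=float)
    peak = np.argmax(wf)                      # simplification: take the global maximum
    level = threshold * wf[peak]
    above = np.nonzero(wf[:peak + 1] >= level)[0]
    i = above[0]                              # first gate at or above the level
    if i == 0:
        return 0.0
    # linear interpolation between the two gates bracketing the crossing
    return (i - 1) + (level - wf[i - 1]) / (wf[i] - wf[i - 1])

# Toy specular, lead-like echo
gates = np.arange(128)
wf = np.exp(-0.5 * ((gates - 60.0) / 1.5) ** 2)

print(tfmra_retrack(wf, threshold=0.50))   # 50 % threshold
print(tfmra_retrack(wf, threshold=0.95))   # higher threshold retracks later in the waveform
```

Raising the threshold from 50 % towards 90-95 % moves the retracking point up the leading edge towards the peak, i.e. to a later gate and a longer range, which lowers the retrieved lead elevation and thereby increases the derived radar freeboard of the surrounding floes; this is the direction of the effect the comment refers to. For the network-setup questions above (activation, optimizer, train/test split), the illustrative calibration sketch after the abstract shows one conventional configuration of those choices.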
Specific Comments:
L118 & 122: the (Lindsay and Schweiger, 2013) reference should not be in parentheses.
L126: I think these PP thresholds should be mentioned here in a Table or within the text.
L284: This should be ‘the trained NN’, not ‘the NN trained’.
L286: I might just have missed it (sorry then) but what is the SARM abbreviation?
Figure 6: This definitely needs a much larger figure caption!
Citation: https://doi.org/10.5194/egusphere-2022-214-RC3
AC2: 'Reply on RC3', Marion Bocquet, 16 Dec 2022
Marion Bocquet, Sara Fleury, Fanny Piras, Eero Rinne, Heidi Sallila, Florent Garnier, and Frédérique Rémy