This work is distributed under the Creative Commons Attribution 4.0 License.
Bounded and categorized: targeting data assimilation for sea ice fractional coverage and non-negative quantities in a single column multi-category sea ice model
Abstract. A rigorous exploration of the sea ice data assimilation (DA) problem using a framework specifically developed for rapid, interpretable hypothesis testing is presented. In many applications, DA is implemented to constrain a modeled estimate of a state with observations. The sea ice DA application is complicated by the wide range of spatio-temporal scales over which key sea ice variables evolve, a variety of physical bounds on those variables, and the particular construction of modern complex sea ice models. By coupling a single-column sea ice model (Icepack) to the Data Assimilation Research Testbed (DART), the grid-cell response of a complex sea ice model is explored with a range of ensemble Kalman DA methods designed to address the aforementioned complications. The impact on the modeled ice-thickness distribution and the bounded nature of both state and prognostic variables in the sea ice model are of particular interest, as these problems are under-examined. Explicitly respecting boundedness has little effect in the winter months, but correctly accounts for the bounded nature of the observations, particularly in the summer months when prescribed SIC error is large. Assimilating observations representing each of the individual modeled sea ice thickness categories consistently improves the analyses across multiple diagnostic variables and sea ice mean states. These results elucidate many of the positive and negative results of previous sea ice data assimilation studies, highlight the many counter-intuitive aspects of this particular data assimilation application, and motivate better future sea ice analysis products.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-2016', Anonymous Referee #1, 25 Oct 2023
Overall comments:
“Bounded and categorized: targeting data assimilation for sea ice fractional coverage and non-negative quantities in a single column multi-category sea ice model” tests two data assimilation approaches on a single-column sea ice model (Icepack, the column physics of CICE) in a perfect-model framework. Assimilating per-category quantities (ice area and volume) is found to substantially improve the skill of the modeling system, bringing it on par with assimilating sea ice thickness and making it superior to assimilating concentration, which, in the standard unbounded approach, actually performs worse than conducting no assimilation. Bounded data assimilation only shows benefits compared to unbounded when assimilating concentration, and even then, the results are no better than conducting no assimilation. For other parameters bounded assimilation does not yield a net improvement, because the ‘unbounded’ cases are effectively bounded in post-processing. The results, especially the per-category ice area findings, are of broad interest to the sea ice community and merit publication in The Cryosphere. However, there are scientific, technical, and communication issues that need to be addressed before publication.
- Scientific: the primary scientific issue that needs to be addressed is that the “observational uncertainties” applied to ice concentration, thickness, and category variables need to be scientifically justified for the findings to be meaningful. It would maximize the impacts of the work if the justification were based on our real or anticipated observational capabilities (e.g., what are the uncertainties in satellite-derived sea ice concentration). However, if this is not feasible, the rationale for the prescribed uncertainties needs to be clearly explained in the text.
- Scientific: a minor point, the Discussion suggests that the findings are valid in a seasonal (first-year) ice regime, but no simulations from a first year ice regime are presented. I recommend limiting the Discussion to what is clearly supported by the presented results (i.e., not making claims about FYI). Alternatively, if the authors choose to increase the scope of the work to include FYI simulations (which will require a different atmospheric and oceanic forcing) that would also be acceptable.
- Technical: the authors do not cite which version of the Icepack code they used, but the plots suggest that the code contains a known error in the mechanical forcing that substantially changes the simulated ice concentration (https://github.com/CICE-Consortium/Icepack/pull/433). If so, the simulations need to be repeated using a version of the model that doesn’t contain this error.
- Communication: it would enhance the impact of this work if the description of the methods were accessible to a general reader of The Cryosphere who is not an expert in data assimilation. As written, I struggled to follow the critical details of the data assimilation procedure. I have tried to provide guidance here in the detailed comments below, but I recommend having a colleague who is not an expert at DA take a close read of the manuscript before resubmitting.
I recommend that the editor return the manuscript for major revisions. I think that this work has the potential to be a valuable step forward for the sea ice community and I would welcome reviewing a revised version.
Detailed comments:
Line 25: please provide a reference for the climate model application of DA.
Lines 29-34: The explanation of the steps of DA is hard to follow for someone who does not regularly use DA. For example, it’s not clear how the second step “updating the model’s estimates of the observed quantity” relates to the first “generate an ensemble of forecasts”. Does “update” refer to changing the state variables in each of the ensemble members, selecting the ensemble member that most closely matches the observations, or something else? Because there are numerous Kalman-related data assimilation techniques, it would be helpful if the explanation here could be geared towards a reader without a background in DA and include a representative example of applying the steps in this work (e.g., updating per-category ice concentration, or similar).
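For readers unfamiliar with ensemble Kalman methods, the two steps discussed above can be sketched as a minimal scalar ensemble Kalman update. This is a generic, illustrative sketch with invented numbers, not the manuscript's actual DART configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: generate an ensemble of forecasts
# (e.g., 30 members' sea ice concentration at one grid cell).
ens = rng.normal(loc=0.85, scale=0.05, size=30)

# A (synthetic) observation of the same quantity and its error variance.
obs, obs_var = 0.95, 0.03**2

# Step 2: "update" means shifting EVERY ensemble member toward the
# observation, weighted by the relative uncertainty of forecast and
# observation. No member is selected or discarded.
prior_var = ens.var(ddof=1)
gain = prior_var / (prior_var + obs_var)  # Kalman gain in [0, 1]
perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), size=ens.size)
analysis = ens + gain * (perturbed_obs - ens)

# The analysis mean lies between the forecast mean and the observation,
# and the analysis spread is reduced relative to the forecast spread.
print(analysis.mean(), analysis.var(ddof=1))
```

In this perturbed-observation variant, each member is nudged toward an independently perturbed copy of the observation so that the updated ensemble retains a statistically consistent spread.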
Line 44: Is the term “analysis inaccuracy” specific jargon in DA? Define if so. Otherwise, I think the term “bias” is more readily understood (by the modeling community at least) to mean a systematic error in an ensemble mean.
Line 47: The use of “Secondly…” confused me because I don’t see a “firstly”. I’d recommend a different transition.
Line 50: Mechanical properties as well as thermodynamic.
Lines 57-61: Related to the comment on Lines 29-34, a flow-chart or some other figure displaying the steps of DA as used in this work, and how aggregation and boundedness play a role in those steps would make this clearer for readers. This flow-chart could also highlight the innovation of this work compared to prior research.
Line 81: Using the term “observed” for a quantity produced entirely synthetically seems likely to cause confusion with readers. Is there another term in the DA literature that can be used? If not, it seems worth defining one.
Lines 84-86: The description of the different grid cells is slightly misleading unless this is using a non-standard version of Icepack. The “open ocean”, “slab ice”, and “categorized ITD” cells all represent the ice cover using an ITD. The difference is purely in the initial conditions (e.g., for slab, all of the ice starts in a single thickness category). The different grid cells are irrelevant to this work so I would just remove the description.
Lines 88-89: What does “tuned to the atmospheric forcing” mean? I generally think of tuning as bringing the model state in line with observations, but that is not apparent in this description.
Line 90: The phrase “discontinuous behavior in ice concentration related to ice-albedo feedback during the melt season” needs to be explained. How large of a change in ice concentration in a time step is required to be “discontinuous”? If this is a bug in the model it needs to be fixed, as it may introduce other more subtle errors that are not remedied by setting R_snw = -2. If it is expected behavior for the model, why exclude it? Note that Figure 1 shows what appears to be discontinuous behavior during freeze-up.
Line 92: The default values in Icepack and the code itself change over time. Please cite the specific tag or release of the code used. Also, it would help the casual reader to specify a few more details about the Icepack setup: at minimum which schemes are being used for thermodynamics, shortwave, and melt ponds. Is a dynamics forcing being used? If so, it should be cited.
Relatedly, I suspect the version of Icepack that was used in this work contained an error related to how the dynamics forcing was implemented (https://github.com/CICE-Consortium/Icepack/pull/433). Note how in the summertime in the Fig. 2 TRUTH simulation (a single ensemble member) the ice concentration and thickness change monotonically and there is no evidence of synoptic scale variations due to ice divergence and convergence (which are present in the SHEBA ice dynamics forcing data that are often used to test Icepack). This error substantially changes the simulated ice concentration. If so, the results need to be recomputed without the error.
Lines 92-93: How is mixed layer salinity being prescribed? The mushy layer physics can be sensitive to this.
Figure 1: the right middle and bottom plots appear to be incorrect. If the y-axes are the same as on the left plots then the thickness (ice or snow) in each category is less than the mean, which cannot be. It looks like what is plotted is actually per-category volume. I recommend plotting the per-category thickness as that is more intuitive for readers who are not CICE experts and more directly comparable with observations. Also, the ice thickness (and maybe snow depth?) appear to be ice-area weighted quantities (I’m guessing from the sudden drop during freeze-up). If so, this needs to be stated explicitly.
Line 95: Why this particular location? If the sea ice is supposed to seasonally advance and retreat from this location, then why doesn’t Figure 1 show ice thickness and area fraction declining to zero (or at least very close) in the summer?
Line 97: Why conduct a 10 year spin up? Is it needed for the DA?
Line 105: See comments above related to making the steps of DA more understandable for the non-expert.
Line 109: Increases relative to what?
Lines 137-142: What is the basis for these observational uncertainties? Please cite studies showing this much uncertainty is present in the observations for a grid cell of this size (added question, what size grid cell?) and, for quantities that lack uncertainty estimates explain the reasoning behind the error distributions that were chosen.
Lines 157-158: If this is a known issue with atmospheric perturbations then why not perturb other forcings? A perturbation in the ice dynamics forcing, for example could produce a very dispersive ensemble. Please discuss the choices about perturbations and their impacts on the findings in the discussion.
Line 205: If this is a standard statistical technique for this application please cite. Otherwise include a brief justification of why this statistical approach was taken.
Line 210: To assess the impacts of these results, it is critical that the prescribed observational uncertainties match our actual observational uncertainties. It is not surprising that assimilating thickness is more informative than concentration for a case with thick (2m average) ice and very high ice concentration. However, our ability to measure thickness is much, much worse than concentration. Just by eye, it looks to me like the observational uncertainties prescribed in Figure 2 are overestimated for concentration and underestimated for thickness. At least for the winter, I would expect observational error in sea ice concentration in the pack ice on a 100 km grid cell to be around 0.03. In Figure 2 it looks closer to 0.1. For thickness, the summertime uncertainty should be dramatically more than the wintertime whether one is using laser or radar altimeters.
Line 217: The statement reads like this is a success, but shouldn’t the results lie closer to the “TRUTH”? If the results were the same as FREE isn’t that just an indication that the assimilation does not increase skill? Please clarify.
Figure 4: Should the caption read “assimilating with bounded algorithms”?
Line 223: I assume that by “implicitly” you are referring to the aicen assimilation, but please spell that out if so.
Lines 224-229: If they are not statistically significantly different, then the preceding discussion of insignificant differences is not needed.
Line 231: Cite the other studies that have found this.
Line 280: The restriction on ridging should be described in the methods.
Lines 280-286: The ridging-turned-off results took me by surprise and it is unclear to me what we are learning from them. If they are critical to the overall findings, please expand the methods (how was ridging turned off?), the results (what other changes did it cause in the ice state? I assume they are large), and the discussion. In its current form, I would recommend removing the ridging-turned-off section entirely.
Lines 293-294: Where were the simulations done in first-year ice regimes? Simulations with no dynamics forcing (I presume that’s how ridging is turned off?) are not the same as first-year ice regimes, even if the annual mean thickness is similar.
Line 296: This is an overly broad statement to make from a single grid cell of multi-year ice, with a single ice dynamics and ocean forcing. Please qualify.
Line 305: I do not see the thin ice simulations that this is referring to. If this is a critical finding from this work, it needs to be supported by simulations of seasonal ice. This should be straightforward to accomplish with the existing code infrastructure (i.e., pick a more southerly point for the atmospheric and oceanic forcing), but would entail significant additional work. Alternatively, the findings could be rewritten to describe just the presented work and simulations in seasonal ice conditions could be recommended as future work.
Line 332: Is this really needed? The aicen lines on the figure seem to overlap with the “TRUTH” almost entirely. At what point is a DA system good enough?
Line 334: It would be helpful to the field to expand the description of what kind of targeted covariance study is needed. Based on your findings, what should field scientists, remote sensing researchers, and modelers do next?
Citation: https://doi.org/10.5194/egusphere-2023-2016-RC1
AC1: 'Reply on RC1', Molly Wieringa, 01 May 2024
The authors would like to thank the editor and the anonymous referees for the time and effort that have gone toward providing feedback on this manuscript. Please find attached the authors’ point-by-point responses to referee comments, questions, and concerns. Unless otherwise noted, all page numbers refer to the revised manuscript.
RC2: 'Comment on egusphere-2023-2016', Anonymous Referee #2, 25 Mar 2024
The authors present a framework for hypothesis testing in sea ice data assimilation (DA). Sea ice DA is complicated by the bounds on the sea ice variables, i.e., SIC and SIT should be greater than zero and SIC should be less than or equal to one. The single column sea ice model Icepack and the Data Assimilation Research Testbed (DART) are used. Non-Gaussian error covariances are tested for SIC, SIT, and category-based assimilation.
The paper is well written, relatively easy to understand (given the relatively complicated topic), and definitely deserves publication. I thank the authors for this nice work! I enjoyed reading it.
However, any paper can be improved. Below I have listed a few points that I would like to address. Overall, I rate them as minor revisions.
- I miss a statement in the introduction (and the abstract) that no 'real' observations are used, but perfect model studies, i.e. Observing System Simulation Experiments, are performed. This should be made clear from the start.
- Line 76: I would add that DART will be explained in section 2.2.
- Line 77: To call Icepack a "single column version of the CICE sea ice model" sounds a bit strange, because the Icepack documentation says "The column physics package of the CICE sea ice model, ‘Icepack’ ...", i.e. Icepack is not a specific version of CICE, but an integral part of CICE. That might cause confusion, as might calling the data assimilation framework “CICE-SCM-DART”.
- Line 88: Explain briefly what CAM6 is.
- Line 89: It would help the reader if a little more were said about the consequences of “setting the snow grain size parameter to a value of -2”. Why does this choice prevent “discontinuous behavior in ice concentration related to ice-albedo feedback during the melt season”?
- Line 93: What does "are consistent" mean? I assume it means that the values are the same for all 30 members, right?
- Line 126: “The use of bounded normal rank histogram (BNRH) distributions in state-space regression is addressed in (Anderson, 2023)”: I would prefer to read here a few sentences about the main findings of Anderson (2023).
- Line 145: Table 2 repeats much of its information four times (i.e., obs. kind and obs. error). I suggest splitting the table into two – one naming the experiments and the other describing the obs. error (which is the same for all experiments).
- Table 2: The obs. error of SIT (10% of SIT value) is unrealistically low, at least when compared to obs. errors from altimetry (see e.g. Figure 2b in https://tc.copernicus.org/articles/11/1607/2017/tc-11-1607-2017.pdf). Any comment on that?
- Line 166: I found the sentence “As a result, an observation from any of individual ITD categories is prevented from updating any state-space variable not also in that same ITD category.” difficult to understand. A reformulation of the sentence helped me: "As a result, an observation from any of the individual ITD categories is prevented from updating any state-space variable that is not also in the same ITD category".
- Line 174: “CICE rebalancing option” – please explain briefly what that is!
- Line 190: Is “NSE” not more commonly used than “CE”? See https://en.wikipedia.org/wiki/Nash%E2%80%93Sutcliffe_model_efficiency_coefficient
- Line 204: iCE is to me not more intuitive!
- Figure 4: I question the usefulness of discussing the snow depth variable in the manuscript. To me it is just a way of diluting the results without learning anything essential.
- line 265: “,the dependency”. White space is missing!
- Line 290: It is true that SIT summer observations were missing (at least space-filling ones), but I would point to the forthcoming data products (e.g. https://www.nature.com/articles/s41586-022-05058-5).
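For reference, the Nash–Sutcliffe efficiency raised in the Line 190 comment has a simple closed form; a minimal illustration follows (the arrays are invented for demonstration, not taken from the manuscript):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect match; 0 means the
    simulation predicts no better than the mean of the observations."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(nse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # perfect match -> 1.0
```

A simulation equal to the observed mean everywhere scores exactly 0, which is why the coefficient is a convenient skill baseline regardless of which abbreviation (NSE or CE) is used.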
Citation: https://doi.org/10.5194/egusphere-2023-2016-RC2
AC2: 'Reply on RC2', Molly Wieringa, 01 May 2024
The authors would like to thank the editor and two anonymous referees for the time and effort that have gone toward providing feedback on this manuscript. Please find attached the authors’ point-by-point responses to referee comments, questions, and concerns. Unless otherwise noted, all page numbers refer to the revised manuscript.
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2023-2016', Anonymous Referee #1, 25 Oct 2023
Overall comments:
“Bounded and categorized: targeting data assimilation for sea ice fractional coverage and non-negative quantities in a single column multi-category sea ice model” tests two data assimilation approaches on single column sea ice model (Icepack, the column physics of CICE) in a perfect-model framework. Assimilating per-category quantities (ice area and volume) is found to substantially improve the skill of the modeling system to be on par with assimilating sea ice thickness and superior to assimilating concentration: which, in the standard, unbounded approach actually performs worse than conducting no assimilation. Bounded data assimilation only shows benefits compared to unbounded when assimilating concentration, and even then, the results are no better than conducting no assimilation. For other parameters bounded assimilation does not yield a net improvement, because the ‘unbounded’ cases are effectively bounded in post-processing. The results, especially the per-category ice area findings, are of broad interest to the sea ice community and merit publication in The Cryosphere. However, there are scientific, technical, and communication issues that need to be addressed before publication.
- Scientific: the primary scientific issue that needs to be addressed is that the “observational uncertainties” applied to ice concentration, thickness, and category variables need to be scientifically justified for the findings to be meaningful. It would maximize the impacts of the work if the justification were based on our real or anticipated observational capabilities (e.g., what are the uncertainties in satellite-derived sea ice concentration). However, if this is not feasible, the rationale for the prescribed uncertainties needs to be clearly explained in the text.
- Scientific: a minor point, the Discussion suggests that the findings are valid in a seasonal (first-year) ice regime, but no simulations from a first year ice regime are presented. I recommend limiting the Discussion to what is clearly supported by the presented results (i.e., not making claims about FYI). Alternatively, if the authors choose to increase the scope of the work to include FYI simulations (which will require a different atmospheric and oceanic forcing) that would also be acceptable.
- Technical: the authors do not cite which version of the Icepack code they used, but the plots suggest that the code contains a known error in the mechanical forcing that substantially changes the simulated ice concentration (https://github.com/CICE-Consortium/Icepack/pull/433). If so, the simulations need to be repeated using a version of the model that doesn’t contain this error.
- Communication: it would enhance the impact of this work if the description of the methods were accessible to a general reader of The Cryosphere who is not an expert in data assimilation. As written, I struggled to follow the critical details of the data assimilation procedure. I have tried to provide guidance here in the detailed comments below, but I recommend having a colleague who is not an expert at DA take a close read of the manuscript before resubmitting.
I recommend that the editor return the manuscript for major revisions. I think that this work has the potential to be a valuable step forward for the sea ice community and I would welcome reviewing a revised version.
Detailed comments:
Line 25: please provide a reference for the climate model application of DA.
Lines 29-34: The explanation of the steps of DA is hard to follow for someone who does not regularly use DA. For example, it’s not clear how the second step “updating the model’s estimates of the observed quantity” relates to the first “generate an ensemble of forecasts”. Does “update” refer to changing the state variables in each of the ensemble members, selecting the ensemble member that most closely matches the observations, or something else? Because there are numerous Kalman-related data assimilation techniques, it would be helpful if the explanation here could be geared towards a reader without a background in DA and include a representative example of applying the steps in this work (e.g., updating per-category ice concentration, or similar).
Line 44: Is the term “analysis inaccuracy” specific jargon in DA? Define if so. Otherwise, I think the term “bias” is more readily understood (by the modeling community at least) to mean a systematic error in an ensemble mean.
Line 47: The use of “Secondly…” confused me because I don’t see a “firstly”. I’d recommend a different transition.
Line 50: Mechanical properties as well as thermodynamic.
Lines 57-61: Related to the comment on Lines 29-34, a flow-chart or some other figure displaying the steps of DA as used in this work, and how aggregation and boundedness play a role in those steps would make this clearer for readers. This flow-chart could also highlight the innovation of this work compared to prior research.
Line 81: Using the term “observed” for a quantity produced entirely synthetically seems likely to cause confusion with readers. Is there another term in the DA literature that can be used? If not, it seems worth defining one.
Lines 84-86: The description of the different grid cells is slightly misleading unless this is using a non-standard version of Icepack. The “open ocean”, “slab ice”, and “categorized ITD” cells all represent the ice cover using an ITD. The difference is purely in the initial conditions (e.g., for slab all, of the ice starts in a single thickness category). The different grid cells are irrelevant to this work so I would just remove the description.
Lines 88-89: What does “tuned to the atmospheric forcing” mean? I generally think of tuning as bringing the model state in line with observations, but that is not apparent in this description.
Line 90: The phrase “discontinuous behavior in ice concentration related to ice-albedo feedback during the melt season” needs to be explained. How large of a change in ice concentration in a time step is required to be “discontinuous”? If this is a bug in the model it needs to be fixed, as it may introduce other more subtle errors that are not remedied by setting R_snw = -2. If it is expected behavior for the model, why exclude it? Note that Figure 1 shows what appears to be discontinuous behavior during freeze-up.
Line 92: The default values in Icepack and the code itself change over time. Please cite the specific tag or release of the code used. Also, it would help the casual reader to specify a few more details about the Icepack setup: at minimum which schemes are being used for thermodynamics, shortwave, and melt ponds. Is a dynamics forcing being used? If so, it should be cited.
Relatedly, I suspect the version of Icepack that was used in this work contained an error related to how the dynamics forcing was implemented (https://github.com/CICE-Consortium/Icepack/pull/433). Note how in the summertime in the Fig. 2 TRUTH simulation (a single ensemble member) the ice concentration and thickness change monotonically and there is no evidence of synoptic scale variations due to ice divergence and convergence (which are present in the SHEBA ice dynamics forcing data that are often used to test Icepack). This error substantially changes the simulated ice concentration. If so, the results need to be recomputed without the error.
Lines 92-93: How is mixed layer salinity being prescribed? The mushy layer physics can be sensitive to this.
Figure 1: the right middle and bottom plots appear to be incorrect. If the y-axes are the same as on the left plots then the thickness (ice or snow) in each category is less than the mean, which cannot be. It looks like what is plotted is actually per-category volume. I recommend plotting the per-category thickness as that is more intuitive for readers who are not CICE experts and more directly comparable with observations. Also, the ice thickness (and maybe snow depth?) appear to be ice-area weighted quantities (I’m guessing from the sudden drop during freeze-up). If so, this needs be stated explicitly.
Line 95: Why this particular location? If the sea ice is supposed to seasonally advance and retreat from this location, then why doesn’t Figure 1 show ice thickness and area fraction declining to zero (or at least very close) in the summer?
Line 97: Why conduct a 10 year spin up? Is it needed for the DA?
Line 105: See comments above related to making the steps of DA more understandable for the non-expert.
Line 109: Increases relative to what?
Lines 137-142: What is the basis for these observational uncertainties? Please cite studies showing this much uncertainty is present in the observations for a grid cell of this size (added question, what size grid cell?) and, for quantities that lack uncertainty estimates explain the reasoning behind the error distributions that were chosen.
Lines 157-158: If this is a known issue with atmospheric perturbations then why not perturb other forcings? A perturbation in the ice dynamics forcing, for example could produce a very dispersive ensemble. Please discuss the choices about perturbations and their impacts on the findings in the discussion.
Line 205: If this is a standard statistical technique for this application please cite. Otherwise include a brief justification of why this statistical approach was taken.
Line 210: To assess the impacts of these results, it is critical that the prescribed observational uncertainties match our actual observational uncertainties. It is not surprising that assimilating thickness is more informative than concentration for a case with thick (2m average) ice and very high ice concentration. However, our ability to measure thickness is much, much worse than concentration. Just by eye, it looks to me like the observational uncertainties prescribed in Figure 2 are overestimated for concentration and underestimated for thickness. At least for the winter, I would expect observational error in sea ice concentration in the pack ice on a 100 km grid cell to be around 0.03. In Figure 2 it looks closer to 0.1. For thickness, the summertime uncertainty should be dramatically more than the wintertime whether one is using laser or radar altimeters.
Line 217: The statement reads like this is a success, but shouldn’t the results lie closer to the “TRUTH”? If the results were the same as FREE isn’t that just an indication that the assimilation does not increase skill? Please clarify.
Figure 4: Should the caption read “assimilating with bounded algorithms”?
Line 223: I assume that by “implicitly” you are referring to the aicen assimilation, but please spell that out if so.
Lines 224-229: If they are not statistically significantly different, then the preceding discussion of insignificant differences is not needed.
Lines 231: Cite the other studies that have found this.
Line 280: restriction on ridging should be described in the methods.
Lines 280-286: The ridging turned off results took me by surprise and it’s unclear to me what we are learning from them. If they are critical to the overall findings, please expand the methods (how was ridging turned off), results (what other changes did it cause in the ice state, I assume they are large), and discussion. In its current form I would recommend removing the ridging turned off section entirely.
Lines 293-294: Where were the simulations done in first-year ice regimes? Simulations with no dynamics forcing (I presume that’s how ridging is turned off?) are not the same as first-year ice regimes, even if the annual mean thickness is similar.
Line 296: This is an overly broad statement to make from a single grid cell of multi-year ice, with a single ice dynamics and ocean forcing. Please qualify.
Line 305: I do not see the thin ice simulations that this is referring to. If this is a critical finding from this work, it needs to be supported by simulations of seasonal ice. This should be straightforward to accomplish with the existing code infrastructure (i.e., pick a more southerly point for the atmospheric and oceanic forcing), but would entail significant additional work. Alternatively, the findings could be rewritten to describe just the presented work and simulations in seasonal ice conditions could be recommended as future work.
Line 332: Is this really needed? The aicen lines on the figure seem to overlap with the “TRUTH” almost entirely. At what point is a DA system good enough?
Line 334: It would be helpful to the field to expand the description of what kind of targeted covariance study is needed. Based on your findings, what should field scientists, remote sensing researchers, and modelers do next?
Citation: https://doi.org/10.5194/egusphere-2023-2016-RC1
AC1: 'Reply on RC1', Molly Wieringa, 01 May 2024
The authors would like to thank the editor and the anonymous referees for the time and effort that have gone toward providing feedback on this manuscript. Please find attached the authors’ point-by-point responses to referee comments, questions, and concerns. Unless otherwise noted, all page numbers refer to the revised manuscript.
-
RC2: 'Comment on egusphere-2023-2016', Anonymous Referee #2, 25 Mar 2024
The authors present a framework for hypothesis testing in sea ice data assimilation (DA). Sea ice DA is complicated by the bounds on the sea ice variables, i.e., SIC and SIT should be greater than zero and SIC should be less than or equal to one. The single column sea ice model Icepack and the Data Assimilation Research Testbed (DART) are used. Non-Gaussian error covariances are tested for SIC, SIT, and category-based assimilation.
The paper is well written, relatively easy to understand (given the relatively complicated topic), and definitely deserves publication. I thank the authors for this nice work! I enjoyed reading it.
However, any paper can be improved. Below I have listed a few points that I would like to address. Overall, I rate them as minor revisions.
- I miss a statement in the introduction (and the abstract) that no 'real' observations are used, but perfect model studies, i.e. Observing System Simulation Experiments, are performed. This should be made clear from the start.
- Line 76: I would add that DART will be explained in section 2.2.
- Line 77: To call Icepack a "single column version of the CICE sea ice model" sounds a bit strange, because the Icepack documentation says "The column physics package of the CICE sea ice model, ‘Icepack’ ...", i.e. Icepack is not a specific version of CICE, but an integral part of CICE. That might cause confusion, as might calling the data assimilation framework “CICE-SCM-DART”.
- Line 88: Explain briefly what CAM6 is.
- Line 89: It would help the reader if a little bit more was said about the consequences of “setting the snow grain size parameter to a value of -2”. Why does this choice “prevent discontinuous behavior in ice concentration related to ice-albedo feedback during the melt season”?
- Line 93: What does "are consistent" mean? I assume it means that the values are the same for all 30 members, right?
- Line 126: “The use of bounded normal rank histogram (BNRH) distributions in state-space regression is addressed in (Anderson, 2023)”: I would prefer to read here a few sentences about the main findings of Anderson (2023).
- Line 145: Table 2 repeats much of its information four times (i.e. obs. kind and obs. error). I suggest splitting the table into two – one naming the experiments and the other describing the obs. error (which is the same for all experiments).
- Table 2: The obs. error of SIT (10% of SIT value) is unrealistically low, at least when compared to obs. errors from altimetry (see e.g. Figure 2b in https://tc.copernicus.org/articles/11/1607/2017/tc-11-1607-2017.pdf). Any comment on that?
- Line 166: I found the sentence “As a result, an observation from any of individual ITD categories is prevented from updating any state-space variable not also in that same ITD category.” difficult to understand. A reformulation of the sentence helped me: "As a result, an observation from any of the individual ITD categories is prevented from updating any state-space variable that is not also in the same ITD category".
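The category-restricted update described in this comment can be pictured as masking the regression step of the ensemble Kalman update. A minimal illustrative sketch (the function and variable names here are hypothetical, not from the manuscript or DART):

```python
import numpy as np

def category_mask(category_of_state, obs_category):
    """Return a 0/1 mask over state variables: 1 where the state
    variable belongs to the same ITD category as the observation,
    0 elsewhere. Multiplying Kalman-gain increments by this mask
    prevents an observation from one ITD category from updating
    state variables in any other category."""
    return (np.asarray(category_of_state) == obs_category).astype(float)

# e.g. five state variables spread over three ITD categories (0, 1, 2);
# an observation from category 1 may only touch the category-1 entries:
mask = category_mask([0, 1, 1, 2, 0], obs_category=1)
# -> array([0., 1., 1., 0., 0.])
```

In practice such a restriction would be applied per observation inside the serial ensemble filter loop, zeroing the cross-category covariance terms.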
- Line 174: “CICE rebalancing option” – please explain briefly what that is!
- Line 190: Is NSE not the more commonly used name than CE? - see https://en.wikipedia.org/wiki/Nash%E2%80%93Sutcliffe_model_efficiency_coefficient
- Line 204: iCE is to me not more intuitive!
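For reference, the Nash–Sutcliffe efficiency (NSE, written CE in the manuscript) that these two comments discuss is one minus the ratio of the model error variance to the variance of the observations. A minimal sketch:

```python
import numpy as np

def nash_sutcliffe_efficiency(sim, obs):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect match; 0 means the model is no better than
    predicting the observed mean; negative values are worse."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# A simulation equal to the observed mean scores exactly 0:
nash_sutcliffe_efficiency([2.0, 2.0, 2.0], [1.0, 2.0, 3.0])  # -> 0.0
```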
- Figure 4: I question the usefulness of discussing the snow depth variable in the manuscript. To me it is just a way of diluting the results without learning anything essential.
- line 265: “,the dependency”. White space is missing!
- Line 290: It is true that SIT summer observations were missing (at least space-filling ones), but I would point to the forthcoming data products (e.g. https://www.nature.com/articles/s41586-022-05058-5).
Citation: https://doi.org/10.5194/egusphere-2023-2016-RC2
AC2: 'Reply on RC2', Molly Wieringa, 01 May 2024
The authors would like to thank the editor and two anonymous referees for the time and effort that have gone toward providing feedback on this manuscript. Please find attached the authors’ point-by-point responses to referee comments, questions, and concerns. Unless otherwise noted, all page numbers refer to the revised manuscript.
-
Model code and software
CICE-SCM-DART Molly Wieringa https://doi.org/10.5281/zenodo.8310112
Christopher Riedel
Jeffrey Anderson
Cecilia Bitz