This work is distributed under the Creative Commons Attribution 4.0 License.
Tuning parameters of a sea ice model using machine learning
Abstract. We developed a new method for tuning sea ice rheology parameters, which consists of two components: a new metric for characterising sea ice deformation patterns and an ML-based approach for tuning rheology parameters. We applied the new method to tune the parametrisation of the brittle Bingham-Maxwell rheology (BBM) implemented and used in the next-generation sea-ice model (neXtSIM). As a reference dataset, we used sea ice drift and deformation observations from the Radarsat Geophysical Processing System (RGPS).
The metric characterises a field of sea ice deformation with a vector of values. It includes well-established descriptors such as the mean and standard deviation of deformation, the structure-function of the spatial scaling analysis, and the density and intersection of linear kinematic features (LKFs). We added more descriptors to the metric that characterise the pattern of ice deformation, including image anisotropy and Haralick texture features. The developed metric can describe ice deformation from any model or satellite platform.
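As an illustration of how texture descriptors of this kind can be assembled, the sketch below builds a small descriptor vector from a gridded total-deformation field using simple statistics and a grey-level co-occurrence matrix (GLCM). The quantile clipping, quantisation depth and GLCM offsets are assumptions made for the sketch, not the exact settings used in the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def deformation_descriptors(eps, n_levels=32):
    """Illustrative descriptor vector for a 2-D field of total deformation `eps`.

    Combines simple statistics with Haralick-style texture features computed
    from a grey-level co-occurrence matrix. Quantisation depth and GLCM
    offsets are assumptions for this sketch, not values from the paper.
    """
    valid = eps[np.isfinite(eps)]
    stats = [valid.mean(), valid.std(), np.percentile(valid, 90)]

    # Quantise the field to n_levels grey levels for the GLCM
    lo, hi = np.percentile(valid, [1, 99])
    img = np.clip((eps - lo) / (hi - lo), 0, 1)
    img = (np.nan_to_num(img) * (n_levels - 1)).astype(np.uint8)

    # Co-occurrence matrix at a one-pixel offset in four directions
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=n_levels, symmetric=True, normed=True)
    texture = [graycoprops(glcm, prop).mean()
               for prop in ("contrast", "homogeneity", "energy", "correlation")]

    return np.array(stats + texture)
```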
In the parameter tuning method, we first run an ensemble of neXtSIM members with perturbed rheology parameters and then train a machine-learning model using the simulated data. We provide the descriptors of ice deformation as input to the ML model and rheology parameters as targets. We apply the trained ML model to the descriptors computed from RGPS observations. The developed ML-based method is generic and can be used to tune the parameters of any model.
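A rough sketch of such a workflow is given below; it is not the authors' implementation (their DNN is described in Section 4.6 of the manuscript), the parameter ranges are placeholders, and `run_nextsim` and `load_rgps_deformation` are hypothetical stand-ins for the model run and the observation reader, with `deformation_descriptors` a descriptor function like the one sketched above.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Illustrative parameter ranges (placeholders, not the ranges used in the paper)
param_names = ["P0", "c_ref", "mu", "C_A"]
lower = np.array([2e3, 0.5e6, 0.4, 1.0e-3])
upper = np.array([1e4, 2.0e6, 1.0, 3.0e-3])

# 1. Latin-hypercube sample of the parameter space, one row per ensemble member
sampler = qmc.LatinHypercube(d=len(param_names), seed=0)
params = qmc.scale(sampler.random(n=50), lower, upper)

# 2. Run the model for each member and compute deformation descriptors
#    (run_nextsim and deformation_descriptors are hypothetical stand-ins)
X = np.array([deformation_descriptors(run_nextsim(p)) for p in params])
y = params

# 3. Train a small fully connected network mapping descriptors -> parameters
x_scaler, y_scaler = StandardScaler().fit(X), StandardScaler().fit(y)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
model.fit(x_scaler.transform(X), y_scaler.transform(y))

# 4. Apply to descriptors computed from RGPS observations (hypothetical loader)
X_obs = deformation_descriptors(load_rgps_deformation())
p_opt = y_scaler.inverse_transform(
    model.predict(x_scaler.transform(X_obs[None, :])))
```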
We ran experiments with tens of members and found optimal values for four neXtSIM BBM parameters: the scaling parameter for ridging (P0 ≈ 5.1 kPa), the cohesion at the reference scale (cref ≈ 1.2 MPa), the internal friction angle tangent (µ ≈ 0.7), and the ice–atmosphere drag coefficient (CA ≈ 0.00228). A neXtSIM run with the optimal parametrisation produces maps of sea ice deformation visually indistinguishable from the RGPS observations. These parameters exhibit weak interannual drift related to changes in sea ice thickness and corresponding changes in ice deformation patterns.
Status: open (until 21 Nov 2024)
RC1: 'Comment on egusphere-2024-2527', William Gregory, 11 Nov 2024
Summary and Decision:
The manuscript “Tuning parameters of a sea ice model using machine learning” by Korosov et al. presents a novel approach of using remote sensing data and machine learning to optimize sea ice rheology parameters within the neXtSIM sea ice model. The approach relies on identifying a set of “descriptors” which characterize the main sea ice deformation patterns within neXtSIM simulations, and training an ML model to create functional mappings from these deformation patterns to sea ice rheology parameters. Once trained, the same sea ice deformation patterns can be computed from remote sensing data and passed to the ML model to estimate “true” parameters. Overall I think this manuscript is well written; it motivates the study well and provides background on recent developments in sea ice models adopting brittle rheologies. To my knowledge, their ML-based approach to tuning parameters is unique and should lead to interesting avenues for future model parameter estimation activities. I would only like to see more in the manuscript about the authors’ thoughts on generalization and some more specific details of the ML model. My recommendation is to publish with minor revisions. My thanks to the authors for their nice work.
General comments:
- Just an overall comment about figures. I find it hard to identify the “take-home message” of many of the figures. Some figures contain a lot of panels (figs 3 and 8), and others lack supporting information, such as color scales or notation to tell the reader what they should be focusing on.
- In Figure 1B, were data within each shaded area collected at the same time (i.e. is this the size of the SAR footprint)? Or is each area the average over some short time window? Without a color scale I also can’t tell relative timing of areas.
- In Figure 8 it’s hard to identify which panels show “good” and “bad” distributions, and looking closely at every single panel will probably give the reader “figure fatigue”. Maybe specific good/bad pdfs can be highlighted somehow?
- To me, Figure 14 is the key figure of the paper, and I think this should also have panels showing neXtSIM with “default” parameters, so you then have panels: i) obs, ii) control, iii) ML-tuned. With this you can maybe then remove figure 7 altogether. Similarly, I think figures 3 and 4 can probably be moved to SI or removed.
- Section 4.6 describes training the ML model. Here it is mentioned that training/test data are split randomly. I was wondering here if the authors have any thoughts on whether this could lead to data leakage, i.e. having training and test samples that are neighbouring in time (a possible time-blocked split is sketched after this list). Also, could you provide more details of the DNN architecture? Are neurons fully connected? Which activation functions are used?
- Section 5.5. Am I correct that in this section you evaluate the performance of the ML-predicted rheology parameters over the entire 1990-2008 period, based on an ML model which has only been trained on winter 2007-2008 data? I would expect the performance to be quite poor here given the limited training data. It would therefore be interesting to see some analysis of offline training/validation error over this extended period. E.g. if you did something like an 80/10/10 training/validation/test split over the 1990-2008 period, does the ML model perform much better than just training on 2006-2007? Also, is training on year-round data critical for capturing the seasonal cycle?
- How do the ML-tuned parameters affect biases in other state variables, such as sea ice thickness? Have you checked changes in thickness and compared to observations? E.g. if you run simulations over the 2010—present period with new parameters, is sea ice thickness more in-line with CryoSat-2?
- A last general comment: I believe all simulations in this study are regional (central Arctic), perhaps to align broadly with the zone of observational data covered by RadarSat/RGPS. Do the authors have any thoughts on how well they expect the ML parameters to generalize to global simulations? (I’m thinking mainly about the Marginal Ice Zone and also about the Antarctic.)
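To make the temporal-leakage concern above concrete, here is a minimal sketch (purely illustrative, not the authors' procedure) of a contiguous-in-time hold-out split, as opposed to a random one:

```python
import numpy as np

def time_block_split(dates, test_fraction=0.2):
    """Split samples into train/test sets by contiguous time blocks.

    Keeping the test period contiguous (rather than sampling randomly)
    avoids training and test samples that are neighbours in time, which
    is the leakage concern raised above. `dates` is an array of sample
    timestamps; the last `test_fraction` of the period is held out.
    """
    order = np.argsort(dates)
    n_test = int(len(dates) * test_fraction)
    test_idx = order[-n_test:]   # most recent block held out for testing
    train_idx = order[:-n_test]
    return train_idx, test_idx
```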
Minor comments:
- L107: I recommend citing some works which have explored Kalman Filters for sea ice parameter estimation (see references below - one of which has been applied in an MEB rheology)
- L129: are F and H equivalent functions (eq 12 and eq 10)? Can F be swapped for H to be consistent?
- L266/267: Is there not a metric we can use to get a quantitative sense of the similarity between model and obs? E.g. Spatial pattern correlation? Or maybe Spectra?
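As one illustration of such a quantitative comparison (an assumption on the reviewer's suggestion, not a metric used in the manuscript), a centred spatial pattern correlation between matching model and observed deformation maps could be computed as:

```python
import numpy as np

def pattern_correlation(model_field, obs_field):
    """Centred spatial pattern correlation between two deformation maps.

    Both fields are assumed to be on the same grid; cells missing in
    either field are excluded. Returns a single score in [-1, 1].
    """
    mask = np.isfinite(model_field) & np.isfinite(obs_field)
    a = model_field[mask] - model_field[mask].mean()
    b = obs_field[mask] - obs_field[mask].mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))
```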
The manuscript appears to have very few grammatical errors.
References:
Massonnet, F., Goosse, H., Fichefet, T. and Counillon, F., 2014. Calibration of sea ice dynamic parameters in an ocean-sea ice model using an ensemble Kalman filter. Journal of Geophysical Research: Oceans, 119(7), pp.4168-4184.
Zhang, Y.F., Bitz, C.M., Anderson, J.L., Collins, N.S., Hoar, T.J., Raeder, K.D. and Blanchard-Wrigglesworth, E., 2021. Estimating parameters in a sea ice model using an ensemble Kalman filter. The Cryosphere, 15(3), pp.1277-1284.
Chen, Y., Smith, P., Carrassi, A., Pasmans, I., Bertino, L., Bocquet, M., Finn, T.S., Rampal, P. and Dansereau, V., 2024. Multivariate state and parameter estimation with data assimilation applied to sea-ice models using a Maxwell elasto-brittle rheology. The Cryosphere, 18(5), pp.2381-2406.
Citation: https://doi.org/10.5194/egusphere-2024-2527-RC1
RC2: 'Comment on egusphere-2024-2527', Anonymous Referee #2, 12 Nov 2024
Review
Tuning parameters of a sea ice model using machine learning, Korosov et al. (2024)
Summary
The authors develop a new set of metrics for characterising the patterns in fields of sea ice deformation with a vector of values. These metrics are used, via a machine-learning (ML) method, to tune the parameters used in the brittle Bingham-Maxwell (BBM) sea ice rheology, as implemented within the neXtSIM sea ice model. A model run using the optimised set of rheology parameters is able to produce maps of sea ice deformation which are similar to observations obtained from the Radarsat Geophysical Processing System (RGPS).
General comments
The paper is well-written and easy to follow. I enjoyed reading it and the results are very interesting, with broader applications to different sea ice models and rheology types. I am pleased to say that I have only minor comments and suggestions (see below), mostly relating to the addition of clarifying information.
Minor comments
Line 27: Add a sentence on the EAP rheology in this paragraph
Line 49: Can Fig. 2(a) from Olason et al. (2022) be reproduced here in some way? It is hard to visualise it without looking it up.
Line 54: Definition of “d” (damage) is missing
Line 58: Definition of “Pmax” (elastic limit) is missing. Would be useful for the reader if the term “ice strength” was used in relation to this in the paper as well, for clarity.
Line 68: Define “h”
Line 79: Missing a closing bracket after “time”. Define “α” here too (referred to as damage parameter in Table 1)
Line 80: Add comma between “envelope” and “or yield curve”
Line 82: Define τ and θN (stress invariants), equation 7.
Table 1:
Remove ??? after “Reference thickness”
Compaction parameter, C: Should this be negative? In equation (2) there is a minus C, but that means C itself should be positive (I may have misinterpreted here)
For P0 (scaling parameter for ridging), is this not the scaling parameter for the sea ice compressive strength, rather than for the ridging directly? So the resistance to ridging… Suggest renaming “scaling parameter for ice strength” or similar
Ca also appears as CA in several places in the text, needs to be consistent. Can a line be added to the text to explain why this is included as a rheology parameter?
Line 87: Schulson et al. reference should be in brackets
Line 89: What does “proper tuning” mean? Expand on this to link into this study and provide further justification for the work.
Figure 1: Would be useful to include a legend to explain the colours. Correct the duplication of the word “images” in caption.
Line 127: Define the operator “F”
Line 136: Need a reference for Latin Hypercube
Line 137: Were these parameters perturbed using the same method?
Figure 2: I don’t think this figure is currently referred to in the text. I think “H” between “observed ice drift” and “observed descriptors” should be “F”. Suggest also defining M, H, and F in the figure caption as well for clarity.
Figure 3: Suggest larger font on figure axes to improve readability, and include more description in the figure caption.
Line 216: add definition of P90
Lines 217-8: “mean and P90 of image anisotropy” – says median in the table, is this correct? Also line 163 only refers to mean.
Line 221: Include years with the dates given
Line 229: How many descriptors were rejected by this method?
Line 254: Need reference for “Adam optimiser”
Line 259: “sea deformation” should be “sea ice deformation”
Figure 7 caption: what is “tree days snapshots”?
Lines 281-2: This is a repeat of information in lines 106-7, remove one of these instances.
Figure 10: What are the error bars? The a90_00 error bar is very large, and this descriptor is excluded by the next test (Figure 11). Would be useful to see this link pointed out in the text and, more generally, it would also be useful to know how much overlap there is in the descriptors eliminated by the different methods. How many are rejected by each method?
Figure 12: Are the black lines actual trend lines or 1:1 lines? Clarify in figure caption. It would be helpful to show the r values on the plots themselves.
Lines 315-7: The qualitative assessment using Figure 14 is useful, but the similarity between the optimised neXtSIM run and the RGPS reference data should be quantified (using a metric of your choice). Also, can the optimised values of the parameters be shown in Table 1 alongside the originals for comparison?
Line 339: The negative values are seen in which parameters?
Line 341: AC should be CA (or Ca)
Line 351: Suggest rewording “requires better values of H and C parameters” to “requires optimised tuning of H and C parameters”.
Line 351: Unclear how tuning the parameters would make the rheology independent of ice thickness (also line 371). Would we not expect to see changes in the sea ice deformation patterns related to the thickness (and strength) of the sea ice?
Line 351: Can this motivation be included earlier?
Figure 14: Font size on the colourbars needs to be increased for legibility
Lines 354-5: Refer to figure 15 here
Figure 15: Rotate text on overlapping dates for legibility
Citation: https://doi.org/10.5194/egusphere-2024-2527-RC2
Data sets
Outputs of the next generation sea ice model (neXtSIM) for winter 2006 - 2007 saved for comparison with RGPS Anton Korosov https://doi.org/10.5281/zenodo.13302007
Model code and software
Sea ice drift deformation analysis software, pysida-0.1 Anton Korosov https://doi.org/10.5281/zenodo.13301869
Interactive computing environment
NeXtSIM parameter tuning software, nextsimtuning-0.1 Anton Korosov https://doi.org/10.5281/zenodo.13302227
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 144 | 39 | 11 | 194 | 5 | 4 |