Multivariate state and parameter estimation with data assimilation on sea-ice models using a Maxwell-Elasto-Brittle rheology
Abstract. In this study, we investigate fully multivariate state and parameter estimation through idealised simulations of a dynamics-only model that uses the novel Maxwell-Elasto-Brittle (MEB) sea ice rheology, in which we estimate not only the sea ice concentration, thickness and velocity, but also its level of damage, internal stress and cohesion. Specifically, we estimate the air drag coefficient and the so-called damage parameter of the MEB model. Mimicking a realistic observation network with different combinations of observations, we demonstrate that various issues can arise in a complex sea ice model, especially in instances where the external forcing dominates the model forecast error growth. Even though further investigation using an operational (coupled dynamics-thermodynamics) sea ice model will be needed, we show that, with the current observation network, it is possible to improve the forecast accuracy of both the observed and unobserved model state, as well as the accuracy of the parameters.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-1809', Anonymous Referee #1, 14 Nov 2023
The authors estimate two parameters in a sea-ice model by means of state-space augmentation with an EnKF. The application is clear and useful. However, I have several concerns about the experiment setup. I recommend publishing the manuscript after the authors have considered my comments below.
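For illustration, a minimal sketch of parameter estimation by state-space augmentation with a stochastic EnKF, as referred to above, could look like the following; this is not the authors' code, and all array shapes and names are assumptions made only for this example:

```python
# Minimal illustrative sketch (not the authors' code) of parameter estimation
# by state-space augmentation with a stochastic EnKF. Shapes and names are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def enkf_update_augmented(X, theta, y, H, obs_err_std):
    """One stochastic EnKF analysis on the augmented ensemble [x; theta].

    X           : (n_state, n_ens) state ensemble (e.g. SIC, SIT, SIV, damage)
    theta       : (n_param, n_ens) parameter ensemble (e.g. C_a, alpha)
    y           : (n_obs,) observations of the state only
    H           : (n_obs, n_state) linear observation operator
    obs_err_std : observation error standard deviation
    """
    n_obs, n_ens = y.size, X.shape[1]
    Z = np.vstack([X, theta])                      # augment the state with the parameters
    HZ = H @ X                                     # parameters are not observed
    Y = y[:, None] + obs_err_std * rng.standard_normal((n_obs, n_ens))  # perturbed obs
    Za = Z - Z.mean(axis=1, keepdims=True)         # augmented anomalies
    Ha = HZ - HZ.mean(axis=1, keepdims=True)       # observation-space anomalies
    Pzh = Za @ Ha.T / (n_ens - 1)                  # cross-covariance of [x; theta] with obs
    Phh = Ha @ Ha.T / (n_ens - 1) + obs_err_std**2 * np.eye(n_obs)
    K = Pzh @ np.linalg.inv(Phh)                   # Kalman gain on the augmented state
    Za_upd = Z + K @ (Y - HZ)                      # parameters are updated via cross-covariances
    return Za_upd[: X.shape[0]], Za_upd[X.shape[0]:]
```

The parameters carry no dynamics of their own; they are corrected only through their sampled correlations with the observed state variables.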
General comments:
I am concerned about how the authors chose to use their computational resources. I am not asking that the authors execute all the simulations I suggest below, but perhaps they can explain their choices a bit better.
1. Have the authors done a sensitivity ensemble free run in which each member has different parameter values drawn from the assumed distribution, but shares the same initial and boundary conditions? From this, one can determine for which values of the parameters the model variables are most sensitive, as well as establish which obs are important for the estimation of the respective parameters, thereby reducing the number of experiments with all the different combinations of obs. See also specific comment 4.
2. The authors aim to improve long-term forecasts by estimating the model parameters. Yet, no long-term forecast was computed. On which timescales does the benefit of the parameter estimation persist? And what is the spread/skill ratio for those forecasts?
3. How does the parameter estimation do compared to perfect model experiments? Because of the different techniques for inflation, one cannot compare. It therefore feels like two separate papers: one for state estimation only, and one for parameter estimation, without a clear connection. Have the authors considered running a perfect model experiment with the same inflation technique as for the parameter estimation experiments? I also wonder about the effect the different inflation techniques have.
4. I fail to understand the added value of repeating all experiments with all the different combinations of obs for all four cases. For example, it is clear a priori that SIV is important for C_a. Why run 3 experiments without SIV? To confirm the importance of SIV, one should be enough (or even none). I think the computational resources could be better spent on verifying the significance of the results, since the results look somewhat noisy. For example, how does the parameter estimation do with a different realisation of the truth? Ideally one would estimate as well as possible a distribution for each parameter, in this case a Gaussian distribution N(mu_c, 5 x 10^-4) for C_a and N(mu_a, 1.5) for alpha. Then draw randomly from that distribution to get the true parameter and draw from the same distribution to get the initial ensemble. Repeat this many times for statistical significance (a minimal sketch of this design is given after this list). I understand that this may be too expensive, so, as an alternative, one could pick at least a few values for the truth, for example +sigma, +2sigma, -sigma and -2sigma, or other values based on sensitivity studies (see the first general comment). Basically, how wrong does the model have to be to get improvement from the parameter estimation?
5. Based on the inset of Figure 3a, the discussion on the violation of SIC bounds when SIT is observed, and the paragraph starting on page 17, line 391: instead of running the SIC+SIT30+SIV experiments, have the authors considered not allowing SIT measurements to affect SIC (and maybe damage)? Basically, setting the covariances between SIT and SIC (and damage) to zero only for SIT measurements (a sketch of this is also given after this list).
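Referring to general comment 4, a hypothetical sketch of the proposed repeated-truth design is given below. Only the standard deviations come from the comment; the means, ensemble size, number of repeats and the `run_twin_experiment` helper are placeholders, not existing quantities or functions:

```python
# Hypothetical sketch of the repeated-truth design in general comment 4.
# Only the standard deviations are taken from the comment; mu_c, mu_a,
# n_repeats, n_ens and run_twin_experiment are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
mu_c, sigma_c = 1.5e-3, 5e-4    # C_a ~ N(mu_c, 5e-4); mu_c illustrative
mu_a, sigma_a = 4.0, 1.5        # alpha ~ N(mu_a, 1.5); mu_a illustrative
n_repeats, n_ens = 10, 40       # illustrative sizes

scores = []
for _ in range(n_repeats):
    true_Ca, true_alpha = rng.normal(mu_c, sigma_c), rng.normal(mu_a, sigma_a)
    ens_Ca = rng.normal(mu_c, sigma_c, n_ens)       # initial parameter ensemble drawn
    ens_alpha = rng.normal(mu_a, sigma_a, n_ens)    # from the same distributions as the truth
    # run_twin_experiment (placeholder) would generate the truth with the sampled
    # parameters, assimilate synthetic observations and return an error score:
    # scores.append(run_twin_experiment(true_Ca, true_alpha, ens_Ca, ens_alpha))
```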
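Referring to general comment 5, a minimal sketch of the suggested variable localization is shown below; it is an assumption about how such a mask could be applied, not the authors' implementation:

```python
# Minimal sketch (an assumption, not the authors' implementation) of the
# variable localization in general comment 5: zero the sample cross-covariance
# between SIT observations and the SIC/damage parts of the state before
# forming the Kalman gain.
import numpy as np

def mask_sit_to_sic(Pzh, state_labels, obs_labels):
    """Pzh          : (n_state, n_obs) state-observation cross-covariance
       state_labels : variable name of each state element, e.g. 'sic', 'sit', 'damage'
       obs_labels   : variable name of each observation, e.g. 'sit', 'sic', 'siv'
    """
    Pzh = Pzh.copy()
    blocked = {("sic", "sit"), ("damage", "sit")}   # decouple SIT obs from SIC and damage
    for i, s in enumerate(state_labels):
        for j, o in enumerate(obs_labels):
            if (s, o) in blocked:
                Pzh[i, j] = 0.0
    return Pzh
```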
Specific comments:
1. page 4, line 99: Why the additional constraint?
2. page 7, line 187: What does DA-only mean in this context?
3. Section 5.1: I suggest moving this section to section 4.3.2, as it is meant as tuning for the experimental setup. Naturally, it is up to the authors.
4. Table 5. How much does the mean change? Based on the low increase in spread for alpha, I am surprised the estimation works so well. I am guessing that the mean of certain variables changes significantly when changing the parameters? See also general comment 1.
5. For Figures 4, 7, 12 and 14, I suggest using a colour bar that goes as far into red as into blue. I understand that the red would be barely visible, but I think that is a good thing, because it would reflect that the deterioration of the fields is significantly smaller than the improvements (a minimal example is sketched after this list).
6. page 17, line 387: I think the statement needs to be reversed: improvement in SIC and SIT (SIT) when SIV (SIC) is assimilated. Also, I am confused about the statement about the comparison between Figures 3 and 4. The two figures seem to be consistent for the chosen localisation radius, except when assimilating SIT only: in Figure 4 it seems to have a neutral effect on SIC, in contrast to the inset of Figure 3a.
7. Table 7: I personally lack intuition about what the actual values mean in Table 7. Perhaps small histograms of correlation values or small scatter plots would be useful here.
8. Figure 8: The colours of the free run and the forecast are too similar.
9. page 27, line 574: Isn't the reason the estimation works well when alpha is estimated only after 10 days, simply because C_a is already significantly more accurate and therefore the parameter estimation problem converges to another (better) local minimum? In other words, the problem resembles the case 3 scenario.
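For specific comment 5, a minimal matplotlib sketch of a symmetric colour scale is given below; the data are random stand-ins for the RMSE-change fields in the figures, not values from the paper:

```python
# Minimal matplotlib sketch of the symmetric colour scale in specific comment 5:
# set vmin = -vmax so the red side extends as far as the blue side.
# The data are random stand-ins for the RMSE-change fields.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
rmse_change = rng.uniform(-0.4, 0.05, size=(20, 20))  # mostly improvements (negative)
vmax = np.abs(rmse_change).max()                       # symmetric colour limits

fig, ax = plt.subplots()
im = ax.pcolormesh(rmse_change, cmap="RdBu_r", vmin=-vmax, vmax=vmax)
fig.colorbar(im, ax=ax, label="RMSE change (analysis minus free run)")
plt.show()
```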
Citation: https://doi.org/10.5194/egusphere-2023-1809-RC1
- AC1: 'Reply on RC1', Yumeng Chen, 30 Jan 2024
RC2: 'Comment on egusphere-2023-1809', Anonymous Referee #2, 07 Dec 2023
Summary
The authors investigate fully multivariate state and parameter estimation through idealized simulations of a dynamics-only model using the MEB sea ice rheology. They employ an iterative ensemble Kalman filter (iEnKF) DA approach with a stopping criterion set to 40 iterations. The model runs are performed with a spatial resolution of 15 km and a 30 s timestep to ensure numerical stability while resolving the propagation of damage.
Four scenarios are evaluated, in which the model physical variables are inferred 1) under a perfect-model setup (truth), 2) together with an erroneous drag coefficient Ca, 3) together with an erroneous damage parameter α, and 4) together with erroneous Ca and α. Different inflation strategies are used for the four scenarios. In scenario 1, a 42-day free ensemble run without DA is followed by a series of 30-day assimilation experiments in which 3 of the 9 fields are bounded quantities (SIC, SIT, level of damage). When only one field is assimilated, that field gets most of the improvement. The cross-correlation between differing quantities is examined. In scenario 2, when only one observation type (SIC or SIT) is assimilated, the analysis underestimates Ca at the end of the analysis time; however, assimilating SIC brought it closer to the truth. They found the best skill in estimating Ca when assimilating SIV alone, due to its close relationship with the wind forcing and Ca. In scenario 3, the assimilation of SIC or SIT alone led to an under- and over-estimation of α, respectively, after 30 days. The simultaneous assimilation of SIC and SIT led to an almost full recovery of the true α value of 4. Results showed that observations of SIV cannot be used to retrieve α effectively. When all types of observations were assimilated in scenario 4, the estimate of Ca was furthest from the truth. They found that the forecast of SIV cannot be improved because it is strictly constrained by the wind field, while other model fields with longer timescales showed improved forecasts. They suggest that coupled DA that estimates the external forcing could improve SIV.
This is a well-written and well-referenced paper with clear thought put into designing and executing the experiments. The tables and figures are concise and easy for the reader to understand. I recommend publication with minor edits as outlined below.
General Comments:
Line 395-396: Please comment on the statement “when SIT is assimilated with SIC, the adverse effect is subdued”. This is partially true for D<0, but for d > 1, damage is at 20.93%. Please explain.
Specific Comments:
Line 45: Add the following references: (Xie et al., 2018; Blockley and Peterson, 2018; Fiedler et al., 2022)
Blockley, E. W. and Peterson, K. A.: Improving Met Office seasonal predictions of Arctic sea ice using assimilation of CryoSat-2 thickness, The Cryosphere, 12, 3419–3438, https://doi.org/10.5194/tc-12-3419-2018, 2018.
Fiedler, E. K., Martin, M. J., Blockley, E., Mignac, D., Fournier, N., Ridout, A., Shepherd, A., and Tilling, R.: Assimilation of sea ice thickness derived from CryoSat-2 along-track freeboard measurements into the Met Office's Forecast Ocean Assimilation Model (FOAM), The Cryosphere, 16, 61–85, https://doi.org/10.5194/tc-16-61-2022, 2022.
Line 54: with “the” changing number…
Line 64: such “a” model
Line 70: …has not yet “been” studied extensively…
Line 87: …DA system “consisting” of
Line 96: rephrase to “methods can be found in chapter 7”
Line 105: remove “we”
Line 153: …”the” DA’s ability…
Line 164: …with “a” uniform…
Line 396: effect is subdued check…
Figure 8: “true” and “forecast” blue lines are a bit difficult to differentiate. Can one of the colors be changed?
Line 593: use of “the” ensemble OR use of “an” ensemble
Line 617: …shed “light” (not plural)
Line 642: Is there another reference to add after “Bertino, 2009”? If not, remove the “,” before the “)”.
Citation: https://doi.org/10.5194/egusphere-2023-1809-RC2
- AC2: 'Reply on RC2', Yumeng Chen, 30 Jan 2024
Viewed
- HTML: 241
- PDF: 102
- XML: 26
- Total: 369
- BibTeX: 18
- EndNote: 13
Polly Smith
Alberto Carrassi
Ivo Pasmans
Laurent Bertino
Marc Bocquet
Tobias Sebastian Finn
Pierre Rampal
Véronique Dansereau