the Creative Commons Attribution 4.0 License.
Constraining local ocean dynamic sea level projections using observations
Abstract. The redistribution of ocean water volume under ocean-atmosphere dynamical processes results in sea level changes. This process, called Ocean Dynamic Sea Level (ODSL) change, is expected to be one of the main contributors to sea level rise along the western European coast in the coming decades. State-of-the-art climate model ensembles are used to make 21st century projections for this process, but there is a large model spread. Here, we use the Netherlands as a case study and show that the ODSL rate of change for the period 1993–2021 correlates significantly with the ODSL anomaly at the end of the century and can therefore be used to constrain projections. Given the difficulty of estimating ODSL changes from observations on the continental shelf, we use three different methods providing seven observational estimates. Despite the broad range of observational estimates, we find that half of the CMIP6 models have rates above the observational range and one has a rate below it. We consider the results of those models as implausible and compare projections of ODSL using all models with projections using the plausible selection. The difference is largest for the low emission scenario SSP1-2.6, for which the median and 83rd percentile are reduced by about 25 % when the plausible selection is used. This method results in reduced uncertainty in sea-level projections. Additionally, having projections that are compatible with the observational record increases trust in their century-scale accuracy. We argue that this model selection is better than using all models to provide sea level projections suited to local users in the Netherlands and that the same method can be used elsewhere.
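The constraint described in the abstract can be sketched in a few lines: compute each model's historical ODSL rate, keep models whose rate falls inside the observational range, and recompute projection percentiles from the retained subset. The numbers below are entirely synthetic (ensemble size, rates, and observational bounds are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: each "model" has a historical ODSL rate
# (mm/yr, 1993-2021) and a projected 2090-2099 ODSL anomaly (cm).
n_models = 20
hist_rates = rng.normal(2.0, 1.0, n_models)
# Projections correlated with historical rates, as found in the paper.
proj_anomaly = 5.0 * hist_rates + rng.normal(0.0, 2.0, n_models)

# Observational range for the historical rate (synthetic bounds).
obs_low, obs_high = 0.5, 2.5

# Keep only "plausible" models whose historical rate falls inside
# the observational range, then recompute projection statistics.
plausible = (hist_rates >= obs_low) & (hist_rates <= obs_high)

all_median = np.median(proj_anomaly)
constrained_median = np.median(proj_anomaly[plausible])
print(f"{plausible.sum()} of {n_models} models retained")
print(f"median projection: all={all_median:.1f} cm, "
      f"constrained={constrained_median:.1f} cm")
```

This is only the selection step; the paper additionally uses seven observational estimates from three independent methods to define the plausible range.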
Status: final response (author comments only)
CC1: 'Comment on egusphere-2024-2872', Tim Hermans, 26 Sep 2024
This paper presents an interesting and relevant exercise in model weighting for projecting ocean dynamic sea-level change. The results are well presented and convincing, especially because of the significant correlation between historical and future ocean dynamic sea-level change. The correlations are not that high, though, which makes me wonder how much the historical rates of ocean dynamic sea-level change in the CMIP6 models are influenced by internal climate variability, and whether models are now being excluded partly because their variability happens to be out of phase with the variability in the observations.
This could (and in my opinion, should) be checked by analyzing multiple initial-condition members, which are available for at least a few CMIP6 models. Alternatively, if this is not feasible for the authors, the influence of internal variability could also be checked by computing rates in windows with the same length in the (de-drifted) pre-industrial control run of each model. The magnitude of the variance in the historical or pre-industrial rates due to internal variability alone would provide important evidence of the robustness of the method that the authors propose.
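The control-run check suggested above can be sketched as follows: fit linear trends in overlapping windows of the observational-record length in a de-drifted pre-industrial control series, and use the spread of those trends as an estimate of the unforced contribution to 29-year rates. The series here is a synthetic red-noise stand-in, not actual piControl output:

```python
import numpy as np

rng = np.random.default_rng(1)

n_years = 500
window = 29  # same length as the 1993-2021 observational period

# Stand-in for a de-drifted piControl ODSL series: AR(1) red noise (cm).
odsl = np.zeros(n_years)
for t in range(1, n_years):
    odsl[t] = 0.7 * odsl[t - 1] + rng.normal(0.0, 1.0)

# Linear trend in every overlapping 29-year window.
years = np.arange(n_years)
rates = []
for start in range(n_years - window + 1):
    sl = slice(start, start + window)
    rates.append(np.polyfit(years[sl], odsl[sl], 1)[0])  # cm/yr

# Spread of unforced 29-year rates: the scale against which historical
# model-observation rate differences should be judged.
print(f"std of unforced {window}-yr rates: {np.std(rates):.3f} cm/yr")
```

If the across-model spread of historical rates is comparable to this unforced spread, excluding models by their historical rate risks penalizing phase differences in internal variability rather than forced-response errors.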
Some other, more minor suggestions:
- L36-37: I would avoid the term 'mitigate' in this context and suggest to rewrite the sentence along the lines of: "so ODSL trends will be positive in some regions and negative in others"
- L114-115: and on the assumption that the drift is linear, have you checked that?
- L118: for clarity, could you perhaps explain why the bilinear regridding does not work for all models?
- Fig3 caption: would be helpful to repeat how the uncertainty in the historical rates of the CMIP6 models is derived
Citation: https://doi.org/10.5194/egusphere-2024-2872-CC1
RC1: 'Comment on egusphere-2024-2872', Anonymous Referee #1, 27 Oct 2024
The study is motivated by the ocean dynamic sea level (ODSL) change being a major contribution to future coastal sea-level change. Yet, there’s a large spread in CMIP projections of this component. The authors present a method to reduce the spread by constraining ODSL changes by the end of the century using observations. They focus on the region off the Dutch coast – the southern part of the relatively shallow North Sea. The authors seek an across-model relationship between ODSL rates over various periods and the ODSL change by the end of the century to identify the historical period that shows the most potential to constrain projections with observations.
They then use three different, largely independent, sets of observations giving a range of present-day (1993-2021) rates of ODSL change and select models that fall in this range to constrain ODSL projections. Using this approach, they find that the ODSL contribution to sea-level rise using all CMIP models might be overestimated.
The paper is well written, the methods are clear, and the results well presented. The authors discuss a series of plausible reasons for the overestimation of “conventional” ODSL projections but also acknowledge the limitations of their approach. Those are, in my opinion, to a large degree related to the importance of internal variability at relatively small spatial scales and short-ish (<30 yrs) time scales, as well as a lack of understanding of the mechanisms that lead to ODSL changes on shallow continental shelves. The authors have discussed the latter and shown that the observational record may be just long enough to reduce the effect of internal variability on their results (although this could be stated more clearly, see below).
General comments
The authors acknowledge that the presence of internal variability impacts the results. From Figure 2, it seems like forced rates take over for periods ending after ca 2015 (CMIP5) and 2010 (CMIP6) respectively. Yet at periods up to 30 years (approx. the length of the observational record used here) there is still a potential contribution from internal variability (at least in CMIP6), as shown by the correlations for periods ending 2020-2030. Could the authors comment on how that affects their results? Is there a way to quantify this effect, for example by using control simulations to estimate the strength of internal variability?
I’m also curious about the strong negative correlations in the CMIP5 models for short periods (<20-25 years) ending around 2000. Are they significant? If so, what is the explanation?
Though not the focus of this study, I noticed the large range the observational estimates cover. Some of them don’t even seem to overlap. I think this could be discussed a bit more.
Minor comments
Decide whether you want to hyphenate sea level whenever it precedes a noun (e.g. sea-level projections, sea-level variability etc).
ODSL from CMIP models, post-processing: Should you not first remove the drift from the models and then remove the global mean?
Line 114: “remove it to the …” -> remove it from the….
Line 120: Out of curiosity: why did you remove the influence of the wind only for CMIP6 and not CMIP5 models?
Figure 2: Are all the correlations shown statistically significant? If not, could you show in the plot which correlations are significant?
Paragraph starting Line 168: I think there is quite a bit of repetition here. You explained already how you arrived at the observational estimates in Data and Methods. Thus, this paragraph can be shortened.
Figure 3, x-labels: In the caption and in Figure 1, Nordic Seas are abbreviated NS. In the x-labels they appear as NWS?
Line 203: I count 13 models that overlap with the observational range and 12 that have a too high rate. Please check.
Line 277 Nort Sea -> North Sea
Citation: https://doi.org/10.5194/egusphere-2024-2872-RC1
RC2: 'Comment on egusphere-2024-2872', Anonymous Referee #2, 17 Nov 2024
I have read the manuscript “Constraining local ocean dynamic sea level projections using observations” by Dewi Le Bars et al., submitted for publication to Ocean Science. The manuscript deals with the important issue of model selection for more reliable future climate projections in the context of regional sea level, with a focus on the coast of the Netherlands in the North Sea. The authors begin by identifying, within CMIP5 and CMIP6 simulations, time periods in which a statistical relationship (i.e., a correlation) exists between past rates of dynamic sea level (DSL) and projected changes for 2090-2099. They identify 1993-2021 as one such period and use it as a reference period for selecting and discarding models based on the degree of agreement between observed and simulated rates of DSL in this period. Following this, they produce new DSL projections based on the selected models and show that the model spread is greatly reduced in their new projections.
I do think that coming up with smart and justified ways of selecting climate models is an important component of current efforts to increase our confidence in future projections, and this study represents a contribution towards achieving this goal. Overall, I think this study likely has the potential to be suitable for publication in Ocean Science. The topic at hand is timely and relevant, the paper is well-written, and the results are adequately presented. My main criticism is that the robustness of the criteria used for model selection is not sufficiently tested and there is little critical discussion on the adequacy of choosing climate models only based on their agreement with observations. Without this robustness analysis and discussion, the claim made in the abstract that “this model selection is better than using all models to provide sea level projections” is not convincing enough, in my view. Below I expand on my concerns and provide suggestions with the hope that they will be useful to the authors in revising the paper.
Main concerns:
1. Robustness analysis. While I agree that there is no definitive way to prove that the new projections are more accurate than the full ensemble, there are several analyses that can provide clues on the robustness of the results and the adequacy of the approach taken here. I summarize some of those below:
The choice of the period 1993-2021 for the selection of the models is somehow arbitrary. How sensitive are the results to the choice of the period? Does the set of selected models change if you use the period 2005-2019 (15 years) instead of 1993-2021? What about if you use the period 1992-2011 (20 years)? All those periods seem to be just as valid as 1993-2021 based on the correlations shown in Figure 2.
To what extent do the observed rates of DSL change in 1993-2021 reflect internal climate variability? This is only a 29-year period and thus internal variability might have very well played an important role. If this were the case, then choosing the models that best match observations might not be the best strategy. Could the authors explore this issue, for example, using an initial condition ensemble under historical forcing to see how large the influence of internal variability is?
To estimate the steric sea-level change from hydrographic data the authors vertically integrate the density of sea water from the sea surface down to 2000 m. They refer to Bingham and Hughes (2012) to justify this choice, however, in that paper it was shown that the assumption of no horizontal gradients in sea level (essentially what is assumed here) only works well in the absence of boundary currents. Wouldn’t calculating the steric height closer to the coast, for example along the 500-m isobath, be more adequate given the presence of the Norwegian Current? How sensitive is the steric calculation to the depth used in the integration?
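The steric calculation questioned above reduces to a vertical integral of the density anomaly, so the sensitivity to the integration depth is easy to probe. A minimal sketch, using an illustrative density profile (the profile shape, reference density, and depths are assumptions for demonstration, not the paper's hydrographic data):

```python
import numpy as np

RHO0 = 1025.0  # assumed reference density, kg/m^3


def steric_height(depth_m, rho, max_depth):
    """eta = -(1/rho0) * integral of (rho - rho0) dz, surface to max_depth.

    Trapezoidal integration; depth_m is positive downward and increasing.
    """
    mask = depth_m <= max_depth
    anom = rho[mask] - RHO0
    dz = np.diff(depth_m[mask])
    return -np.sum(0.5 * (anom[:-1] + anom[1:]) * dz) / RHO0


# Synthetic profile: lighter water near the surface, denser at depth.
depth = np.linspace(0.0, 2000.0, 201)
rho = RHO0 + 2.0 * (1.0 - np.exp(-depth / 500.0)) - 1.5 * np.exp(-depth / 100.0)

# Compare integration depths, e.g. along a 500-m isobath vs. 2000 m.
for zmax in (500.0, 1000.0, 2000.0):
    print(f"steric height (0-{zmax:.0f} m): "
          f"{steric_height(depth, rho, zmax):.3f} m")
```

With real hydrography, the difference between the 500-m and 2000-m integrals would quantify the sensitivity the reviewer asks about.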
2. Adequacy of the selection criteria. I find Figure 1c to be both interesting and somehow concerning. The ensemble spread is quite narrow during the whole observational period (it is very narrow in the period 1990-2005) and starts growing rapidly in the unobserved period. To me, this behavior indicates that: 1) CMIP models have been strongly tuned to reproduce the observed climate; 2) this tuning process leads to a set of model parameter values that differs from one model to another. That is, different parameter values can lead to models that all agree well with the observed DSL rates, but they produce different projections, hence the large ensemble spread in the future. This seems to cast doubt on the validity of the strategy of selecting models in terms of how well they agree with the observations. Under this strategy, the more “tuned” models will always be selected, but they are not guaranteed to produce the most credible projections (given the parameter degeneracy). There is also the issue of how well models should agree with observations, given the large uncertainty surrounding observational rates of DSL and the influence of internal climate variability on the rates. I think that all these issues should be considered in any model selection strategy. At the very least, they should be discussed in some detail. In Section 6, the authors provide a nice discussion on the limitations of the CMIP models, but this discussion focuses primarily on processes that are missing from (or not modeled by) the models rather than on the issues that more directly affect the validity of the model selection criteria.
Specific comments:
Lines 53-54. A brief explanation of the method of emergent constraints would be helpful here.
Ocean Reanalysis. Have sea-level trends from SODA been validated in previous studies? If so, I would suggest including a reference here.
Lines 107-108. What data is used to remove wind influences?
Lines 120-122. Internal climate variability can be also due to remote forcing, besides local winds. Have the authors considered this?
From Figure 2, only periods ending in or after about 2010 show a significant correlation between past rates and future changes. The drop in correlation before and after this year is not gradual but very sharp. Physically, why should rates in the period 1993-2010 be predictive of future changes but not rates in the period 1993-2008? This is a bit concerning. Does the cut-off year coincide with the last year of the historical simulations in CMIP5 and CMIP6 models? Could you please comment on this?
Lines 203-205. What is the correlation between past rates and future changes in the 12 selected models?
Lines 253. “To be about…”?
Citation: https://doi.org/10.5194/egusphere-2024-2872-RC2