the Creative Commons Attribution 4.0 License.
No critical slowing down in the Atlantic Overturning Circulation in historical CMIP6 simulations
Abstract. The Atlantic Meridional Overturning Circulation (AMOC) is a key component of the Earth’s climate system, and is theorized to have multiple stable states. Critical slowing down (CSD) can detect stability changes in Earth system components, and has been found in sea-surface temperature (SST) based fingerprints of the AMOC. We look for CSD in simulations from 27 models from the sixth phase of the Coupled Model Intercomparison Project (CMIP6). We calculate three different CSD indicators for the AMOC streamfunction strengths at 26.5° N and 35° N, as well as for a previously suggested AMOC fingerprint based on averaging SSTs in the subpolar gyre region. We find mixed results. Most models do not show a statistically significant sign of CSD, and no model shows a conclusive sign of CSD in all ensemble members. However, some models exhibit a number of significant increases in the CSD indicators of the streamfunction strength that is highly unlikely to occur by chance (p<0.05). By contrast, the number of significant increases in the AMOC SST fingerprint is no more than would be expected by random chance. Since we do not know whether the AMOC in these models is approaching a critical transition, we can neither confirm nor refute the validity of CSD for detecting an upcoming AMOC collapse. However, we can confirm that the AMOC SST fingerprint is not prone to false positives.
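For readers unfamiliar with CSD indicators, the two most common ones are sliding-window variance and lag-1 autocorrelation, both of which should rise as a system loses stability. The following is a minimal illustrative sketch, not the authors' exact method; the window length and the crude per-window detrending are assumptions made here for brevity.

```python
import numpy as np

def csd_indicators(x, window=50):
    """Sliding-window variance and lag-1 autocorrelation of a time
    series: two classic critical-slowing-down indicators. A rising
    trend in either suggests a loss of stability. Entries before the
    first full window are NaN."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    var = np.full(n, np.nan)
    ar1 = np.full(n, np.nan)
    for i in range(window, n + 1):
        w = x[i - window:i]
        w = w - w.mean()  # remove the window mean (crude detrending)
        var[i - 1] = w.var()
        # lag-1 autocorrelation: correlation of the window with itself
        # shifted by one time step
        ar1[i - 1] = np.corrcoef(w[:-1], w[1:])[0, 1]
    return var, ar1
```

A statistically significant positive trend in such indicators, e.g. assessed against surrogate time series, is what is meant by a "significant increase" in the abstract above.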
Status: closed
RC1: 'Comment on egusphere-2024-1106', Anonymous Referee #1, 15 May 2024
Review of “No critical slowing down in the Atlantic Overturning Circulation in historical CMIP6 simulations”
by Maya Ben-Yami et al.
Submitted to EGUsphere
The use of various ‘Critical Slowing Down’ (CSD) metrics as potential indicators of an approaching climate system tipping point or dynamical bifurcation has received a lot of attention recently. In particular, the paper by Boers 2021 (‘B21’ in the present manuscript) showed an apparent increase (towards instability) in one of these CSD metrics as applied to observations of various proxies or ‘fingerprints’ of the AMOC – suggesting that the AMOC may have been getting closer to such a tipping point in recent decades. This paper examines the CMIP6 historical simulations, to see if the simulations bear any imprint of CSD as found in B21. It looks at three alternative CSD metrics, applied to three AMOC variables: the actual overturning strength at two latitudes, and a commonly used, SST-based AMOC fingerprint. Because model data are used, the actual overturning timeseries are available as well as the indirect ‘fingerprint’ data, providing a check on the reliability of using the fingerprint as a proxy for the actual AMOC, as well as a check for consistent CSD indications across the three variables. The results are rather equivocal, with a few ensemble members of a few models showing indications of CSD in some, but not all, of the AMOC-related timeseries (as might be expected from random sampling). While it is hard to draw definite conclusions from the results, the authors do note that the SST fingerprint does not appear to show a lot of false positives for CSD, when compared with the actual (modelled) AMOC timeseries. This was not at all obvious a priori, and helps to build evidence that the SST-based fingerprint could be a useful tool in detection/early warning of AMOC tipping.
This is a worthwhile study, the analysis has been carefully and thoroughly carried out and I found the presentation generally easy to follow. I liked the approach of looking at several variables as a way to test whether the CSD indicators are showing a consistent physical change. I have a number of fairly minor comments below, but overall I think the paper is worthy of final publication following some minor revisions in response to these.
L 33-36, 65-68, 246-248. I think there is potentially some confusion here between a large weakening of the AMOC, which is what is described in these lines, and crossing a (fold?) bifurcation, which I think is what CSD is detecting. The AMOC could undergo a substantial weakening without crossing any bifurcation point (monostable throughout). Could CSD be used to detect the difference between these two situations? Lines 66-68 suggest that a few models do show a large AMOC weakening (albeit some time into the future). Are these the models that show CSD detection in their historical runs?
Observations. Since this study takes advantage of the ability to use actual overturning from the models, could you also calculate the CSD indicators from the more ‘oceanographically-based’ AMOC estimate of Fraser and Cunningham https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2021GL093893 ?
L 199-200. I’m not sure that this situation is so unlikely. We’re looking at a time-varying dynamical system where there is natural internal variation on the same timescale as the forcing, so there will be some natural variability in the effective equilibrium structure, which would cause variability in whether/when CSD was detected. Hence I’m not so sure that explanation (a) is less likely than (b). It might be possible to differentiate better between these possibilities through analysis of the multi-century pre-industrial control runs, but that would be another (substantial) study and I’m not suggesting that the authors need to do that in this paper. But in the absence of further evidence it feels to me as if both (a) and (b) are quite likely.
L 235-245. I didn’t really follow this paragraph (although I agree with the last sentence!). Surely (line 242) the evidence for the negative temperature feedback discussed comes from analysis of climate models? There are other negative feedbacks that seem to be stronger than this temperature feedback in some models, e.g. salinity advection by gyre processes, or atmospheric feedbacks (Jackson et al. https://link.springer.com/article/10.1007/s00382-016-3336-8), so, along with the fact that there are other processes driving the sub-polar SST (as noted in the manuscript), I’m not sure this paragraph is a strong argument for the SST index as an indicator of AMOC CSD/tipping.
Citation: https://doi.org/10.5194/egusphere-2024-1106-RC1 - AC1: 'Reply on RC1', Maya Ben Yami, 12 Jun 2024
RC2: 'Comment on egusphere-2024-1106', Anonymous Referee #2, 17 May 2024
I am unable to do a review of this paper, and I am honestly very surprised to see a positive assessment from Anonymous Referee #1, because the paper lacks a Methods section. Maybe they had access to a different version of the paper than the one that is available online on this page?
The paper goes from the first section “Introduction” to the second section “Results” and the reader has no clue where the methods are. In fact, I was not able to find “Methods” in this paper and I have no idea which model simulations are used here. At some point at line 80 I am told to look at table C1 where the list of CMIP6 models would be found. Regrettably, here too I can’t find what experiments are analyzed here, just a list of ensemble members. Perhaps historical since it’s written in the title?
Here is a partial list of comments, but honestly I wasn’t able to progress much into the paper since I have no idea what data are being analyzed here:
L24: remove “Roughly” from here
Fig. 1: how do you explain the outliers here? The INM models? What does the streamfunction look like for the two models?
Fig. 2: not having a Methods section makes this plot unreadable; what am I comparing the models with? Which years are used for the observational data? Even without knowing what I am looking at, there is nevertheless an important flaw: the fitted red slope is, by eye, influenced by outliers… you should present a linear fit that is not sensitive to outliers. Again, without a Methods section it is impossible for me to figure out what the black line and gray bands mean for the observations.
Citation: https://doi.org/10.5194/egusphere-2024-1106-RC2 - AC2: 'Reply on RC2', Maya Ben Yami, 12 Jun 2024
Viewed
HTML | PDF | XML | Total | BibTeX | EndNote
---|---|---|---|---|---
363 | 162 | 28 | 553 | 20 | 17