Simulated and Observed Transport Estimates Across the Overturning in the Subpolar North Atlantic Program (OSNAP) Section
Abstract. A comparison of simulated and observed overturning transports and related properties across the Overturning in the Subpolar North Atlantic Program (OSNAP) sections for the 2014–2022 period is presented, considering transports in both depth and density space. The effort was motivated by the observational transport estimates at the OSNAP-West (OW) and OSNAP-East (OE) sections, which show a minor role for the Labrador Sea (LS) in setting the mean and variability of the overturning in the subpolar North Atlantic. There are 9 participating groups from around the world, contributing a total of 18 ocean – sea ice simulations with 6 different ocean models. The simulations use a common set of interannually varying atmospheric forcing datasets. The horizontal resolutions of the simulations range from a nominal 1° to eddy-resolving resolutions of 0.1°–0.05°. While there are many differences between the simulations and observations, as well as among the individual simulations, in terms of transport properties, the simulations show significantly larger transports at OE than at OW, in general agreement with the observations. Analyzing overturning circulations in both depth and density space together provides a more complete picture of the overturning properties and features. This analysis also reveals that, in both the simulations and observations, northward and southward flows substantially cancel each other, producing much smaller residual (total) transports. Such cancellations tend to be much more prominent in depth space than in density space. In general, the observed transport features are captured better at OE than at OW. The simulations generally show larger (smaller) transports with positive (negative) temperature and salinity biases in the upper ocean near the OSNAP sections, but with no such relationship with density biases. The transport profiles of the high-resolution simulations generally agree better with the observations, but challenges remain in some of the other metrics considered in our analysis. When transports are calculated using a density referenced to 2000-m depth, rather than the ocean surface, the relative contributions of transports at OW increase modestly.
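To make the two frameworks concrete, the sketch below accumulates the same per-cell section transports from the surface downward (depth space) and from light to dense classes (density space). It uses synthetic, hypothetical arrays rather than the study's data or the METRIC code, and the bin choices are illustrative assumptions only.

```python
# Minimal sketch (not the authors' METRIC code): overturning transport across a section
# accumulated in depth space and in density space. All arrays are synthetic placeholders.
import numpy as np

nz, nx = 50, 200                                   # vertical levels, horizontal cells
rng = np.random.default_rng(0)
v = rng.normal(0.0, 0.05, (nz, nx))                # meridional velocity (m/s)
sigma = np.sort(rng.uniform(26.0, 28.0, (nz, nx)), axis=0)  # potential density anomaly (kg/m^3)
dz = np.full((nz, nx), 50.0)                       # layer thickness (m)
dx = np.full((nz, nx), 5.0e3)                      # cell width along the section (m)

transport = v * dz * dx / 1.0e6                    # transport per cell (Sv)

# Depth space: zonally sum each level, then accumulate from the surface downward.
psi_z = np.cumsum(transport.sum(axis=1))

# Density space: bin the same cell transports by local density, then accumulate
# from light to dense classes.
edges = np.arange(26.0, 28.05, 0.05)
per_bin, _ = np.histogram(sigma, bins=edges, weights=transport)
psi_sigma = np.cumsum(per_bin)

print(f"max depth-space overturning:   {psi_z.max():.2f} Sv")
print(f"max density-space overturning: {psi_sigma.max():.2f} Sv")
```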
Status: final response (author comments only)
CC1: 'Comment on egusphere-2025-5406', Alexander Shchepetkin, 14 Nov 2025
Line 1416, "We use adaptive-implicit vertical advection (Shchepetkin & McWilliams, 2005)": incorrect citation. It should be (Shchepetkin, 2015) instead, referring to https://www.sciencedirect.com/science/article/pii/S1463500315000530.
CEC1: 'Comment on egusphere-2025-5406 - No compliance with the policy of the journal', Juan Antonio Añel, 07 Dec 2025
Dear authors,
Unfortunately, after checking your manuscript, it has come to our attention that it does not comply with our "Code and Data Policy".
https://www.geoscientific-model-development.net/policies/code_and_data_policy.html
You have archived your code on GitHub. However, GitHub is not a suitable repository for scientific publication. GitHub itself instructs authors to use other long-term archival and publishing alternatives, such as Zenodo. Also, you have not stored the datasets used in your manuscript in appropriate repositories, but simply provide links to sites that do not comply with the requirements of our policy. Therefore, the current situation with your manuscript is irregular, as it should never have been accepted for Discussions or sent out for peer review given such lack of compliance.
Please publish your code and data in one of the appropriate repositories and reply to this comment with the relevant information (link and a permanent identifier for it, e.g. a DOI) as soon as possible, as we cannot accept manuscripts in Discussions that do not comply with our policy.
I must note that if you do not fix this problem, we cannot continue with the peer-review process or accept your manuscript for publication in our journal.
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/egusphere-2025-5406-CEC1
AC1: 'Reply on CEC1', Gokhan Danabasoglu, 09 Dec 2025
Dear Juan,
Thank you for your comment. First, we would like to assure you that it is our intention to comply with the journal’s code and data policy fully.
We now have a DOI for METRIC on Zenodo at https://doi.org/10.5281/zenodo.17858479. This will be included in the manuscript.
The Python code for the TEOS-10 Equation of State is not ours. Therefore, we followed the citation instructions provided by the developers of the code, which ask users to cite McDougall and Barker (2011), as we have done here. We note that there is no DOI for this package, but we provided the relevant website at https://github.com/TEOS-10/GSW-Python. This repository contains useful information about TEOS-10 (Thermodynamic Equation of Seawater - 2010), the international standard for the description of the thermodynamic properties of seawater. Because this widely used software package is not ours, we do not think that it is up to us to provide a DOI for it.
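For readers unfamiliar with the package, a minimal sketch of how the GSW-Python toolbox is typically called is given below; the input values are hypothetical and the function names follow the toolbox's public interface.

```python
# Minimal sketch of a typical GSW-Python (TEOS-10) call sequence; the numbers here
# are hypothetical, not values from the manuscript.
import gsw

SP, t, p = 35.0, 4.0, 1500.0          # practical salinity, in-situ temperature (deg C), pressure (dbar)
lon, lat = -40.0, 58.0                # nominal position in the subpolar North Atlantic

SA = gsw.SA_from_SP(SP, p, lon, lat)  # Absolute Salinity (g/kg)
CT = gsw.CT_from_t(SA, t, p)          # Conservative Temperature (deg C)

print(gsw.sigma0(SA, CT))             # potential density anomaly referenced to 0 dbar (kg/m^3)
print(gsw.sigma2(SA, CT))             # potential density anomaly referenced to 2000 dbar (kg/m^3)
```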
We are a bit confused by your comments about the datasets, as we thought we followed the guidelines as well as what has been done in manuscripts published in GMD. The guidelines state that “ ….. data as developed in the paper must be archived.” Following this guidance, all the simulation datasets used in our analysis region have already been made available at https://doi.org/10.5281/zenodo.17858479, in compliance with GMD’s data policy. We note that the entire global datasets for the full integration lengths are too large (several hundreds of terabytes) to be archived on Zenodo. While these full datasets are not used in our analysis, we provided additional information on where some of these datasets can be accessed, including the Earth System Grid Federation (ESGF) and institutional repositories. Again, this is compliant with GMD’s data policy, which allows this information as long as the actual data used in our analysis are made available. Our intention here was simply to help readers who may want to use these simulations for other analyses. We can certainly delete this additional information. Again, all the data needed to reproduce the analysis presented in our manuscript have been available at https://doi.org/10.5281/zenodo.17858479.
We finally note that the OSNAP observations are available at Georgia Tech Digital Repository at https://doi.org/10.35090/gatech/78023.
Please let us know if our understanding of and compliance with the GMD’s code and data policy need further changes.
Best,
Gokhan Danabasoglu and Fred Castruccio
Citation: https://doi.org/10.5194/egusphere-2025-5406-AC1
CEC2: 'Reply on AC1', Juan Antonio Añel, 10 Dec 2025
Dear authors,
Regarding the code for the TEOS-10 equation, it does not matter at all that you are not the developers. Such code is in GitHub under a BSD license, and therefore, you have the right to copy and redistribute it, provided you retain the same license. Therefore, please, proceed to create a repository that we can accept to store it, and reply to this comment with the information for it.
Regarding the OSNAP observations, as we cannot accept the Georgia Tech Library for storing the assets, and the dataset is under US copyright law, please store it in a private Zenodo repository. In this way you comply with the restrictions for distribution, and at the same time we are sure that the dataset is securely stored.
Regarding the outputs from models, we do not require you to store the full output files, but the exact data you use, i.e. the variables for the specific region. It is unclear if this is what you mean by "The simulation datasets for our analysis region"; please clarify. If the Zenodo repository for the mentioned analysis region does not contain all of the data, you could add it. Probably the size needed to store the specific data that you use is not several terabytes.
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/egusphere-2025-5406-CEC2
AC2: 'Reply on CEC2', Gokhan Danabasoglu, 12 Dec 2025
Dear Juan,
Thank you for your second comment.
Regarding the outputs from models, we had indicated that “all the data needed to produce the analysis presented in our manuscript have been available at https://doi.org/10.5281/zenodo.17858479” in the last sentence of the related paragraph in our previous reply. So, we believe that we have been clear and compliant with the GMD data policy.
Regarding the OSNAP data and the TEOS-10 software packages, we believe that there are moral and ethical considerations that are not reflected in the GMD policies. While we acknowledge what is permissible under licensing agreements, and that providing a DOI does not imply ownership of a dataset or software, as a community we should abide by certain standards, considerations, and principles beyond just binary policy implications. With this background, we have reached out to the leaders of the OSNAP and TEOS-10 efforts, seeking their guidance. Both groups indicated to us that we need to follow their recommended citation protocols and not create any duplicate DOIs or duplicate code repositories. Please see the respective specifics below.
For OSNAP, the program website provides clear guidance on how to cite the data (https://www.o-snap.org/data-access/), including an official and permanent DOI (https://doi.org/10.35090/gatech/78023). The data are freely available and can be downloaded directly from that link.
For TEOS-10, we have been reminded by the leads of the project that:
“On 1st January 2010 TEOS-10 became the global standard for what seawater is, how to evaluate its thermodynamic properties, and how to measure it.”
“Since TEOS-10 is the global definition of seawater, if a reference is needed, then a reference to the TEOS-10 Manual, or Getting Started, or to the TEOS-10 web site is sufficient.”
“IOC, SCOR, IAPSO and IUGG all officially blessed TEOS-10 in 2009 and 2010, and they did not bless any DOI sites. The officially blessed things are the TEOS-10 Manual, the Getting Started document, and the software on the TEOS-10 software web page. That is all. Nothing else has received a blessing, so nothing else is actually TEOS-10.”
Furthermore:
“This [the TEOS-10 website] site is the official source of information about the Thermodynamic Equation Of Seawater - 2010 (TEOS-10), and the way in which it should be used.”
So, any DOI creation would go against the official OSNAP and TEOS-10 policies, which are aimed at preventing the proliferation of copies of these datasets and packages; such proliferation also defeats the purpose of DOIs as unique identifiers. Please also note that how to cite and where to store these datasets and packages, including their non-duplication at other sites, is under the governance of various international bodies and projects.
Following this input and your comments, we also searched EGU publications, including those in GMD, regarding how they have cited both OSNAP and TEOS-10. Many manuscripts use these, especially TEOS-10, but none seems to have a DOI on Zenodo for these. We just provide three recent examples here:
Mignac et al. (2025, GMD, doi: 10.5194/gmd-18-3405-2025) cites the OSNAP website as the data source.
Couplet et al. (2025, GMD, doi: 10.5194/gmd-18-3081-2025) uses the same TEOS-10 packages, but they do not include them in their code availability section. They cite McDougall and Barker (2011) following the TEOS-10 guidelines.
Allende et al. (2024, GMD, doi: 10.5194/gmd-17-7445-2024) uses the TEOS-10 packages, but they do not include them in their code availability section. They cite McDougall and Barker (2011) following the TEOS-10 guidelines.
Indeed, there are other manuscripts in GMD with the same citation approaches as in the above examples.
In our case, we wanted to be as transparent as possible and included this information in the code availability section of our manuscript, which appears not to be consistent with what is done in other GMD publications. Perhaps this was our mistake.
So, your comments and requests are putting us between a rock and a hard place, considering community moral and ethical implications and the existing precedent in published GMD manuscripts. Again, what you are asking might be perfectly fine legally, but it makes us rather uncomfortable and goes against the citation protocols laid down by the data and code creators and approved by the international decision makers.
In light of the above considerations, and given the existing precedent in published GMD manuscripts, we hope that you will agree that the official OSNAP DOI is an acceptable way to acknowledge and cite the OSNAP data in GMD. For TEOS-10, the precedent set in GMD seems to be not to include any TEOS-10 information in the code availability section. We propose to remove the link to the GitHub repository from the “code and data availability” section and to add the link to the official TEOS-10 website (https://www.teos-10.org/) in the main text, along with the McDougall and Barker (2011) reference, as requested by the GSW Oceanographic Toolbox developers. Again, please note that the TEOS-10 website (https://www.teos-10.org/) is the main sanctioned source of information about TEOS-10 and the way in which it should be used. It includes links to download the various official TEOS-10 software packages, including the Python version we relied on for our analysis, and instructions on how to use those packages. It is by far the best source of information for anyone who wants to use TEOS-10.
Just as you are, our aim is to be fully transparent and provide all the datasets and software packages used in our study. We have accomplished this in our manuscript, also considering ocean science and modeling communities’ guidelines. After all, we are all in science together as a community.
Best,
Gokhan Danabasoglu and Fred Castruccio
Citation: https://doi.org/10.5194/egusphere-2025-5406-AC2
CEC3: 'Reply on AC2', Juan Antonio Añel, 12 Dec 2025
Dear authors,
I appreciate your reflection. However, what is wrong are these ill-perceived "ethical" or "moral" rules, which actually have nothing to do with ethics or morals. It is not unethical to copy and redistribute something for which the authors themselves have granted permission in the license. GMD has a policy, and we are obligated to comply with it. The fact that other authors in the past, or published papers, have failed to comply with the requirements of the journal, whatever the reason, does not justify non-compliance in this case. Therefore, we must insist that you comply with the requirements and store the required items in a repository acceptable according to the journal's policy.
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/egusphere-2025-5406-CEC3
CC2: 'Comment on egusphere-2025-5406', Stephen Griffies, 14 Dec 2025
I have followed this exchange concerning the DOI for the OSNAP data and TEOS-10 code.
Both OSNAP and TEOS-10 are well established and sanctioned international projects. So long as the authors provide the specific links to the data and code, along with any necessary versioning information, that is sufficient for any interested reader to know precisely what was used, thus serving the ideals of open and reproducible research. Asking authors to also copy OSNAP and TEOS-10 onto their own Zenodo site with a DOI will proliferate data/software in a manner that is not sanctioned by those who developed the observations or software.
I agree that there is nothing illegal with producing a DOI of someone else's work, so long as the work has proper copyrights. But the question is whether doing so serves the ideals of open and reproducible science in a manner that does not present any perception of spuriously moving ownership. In my assessment, it confuses and so does not serve the ideals of open research.
Although it is quite possible I am missing something, this particular case seems to be one where adhering strictly to the verbiage of a journal's rule leads to obfuscation and potential for confusion. I thus support what the authors propose.
I offer one somewhat related example.
Citation: https://doi.org/10.5194/egusphere-2025-5406-CC2
CC3: 'Comment on egusphere-2025-5406', Trevor McDougall, 15 Dec 2025
I strongly discourage the proliferation of the TEOS-10 source code. That is, the TEOS-10 algorithms SHOULD NOT be reproduced in a DOI. TEOS-10 is the officially adopted description of seawater, Ice-Ih and humid air. It has been endorsed by IAPSO, SCOR, IAPWS and IUGG, the four key relevant international bodies, and formally adopted by the Intergovernmental Oceanographic Commission at its Assembly in mid-2009, after every country of the United Nations was asked to comment on the draft TEOS-10 release documents. TEOS-10 became the internationally endorsed definition of seawater on 1st January 2010. I chaired the Working Group 127 of SCOR/IAPSO that developed TEOS-10.
When TEOS-10 was announced to the world there were official announcements in several oceanographic journals recommending how its adoption should proceed. An important aspect of these announcements was that the computer algorithms of TEOS-10 should be sourced from one location, namely from the web page https://www.teos-10.org/software.htm . The point of this recommendation is to reduce the possibility of incorrect code infiltrating the community. This is the recommendation of the Intergovernmental Oceanographic Commission (IOC).
I have published a paper in GMD that used TEOS-10 and was not asked to duplicate the TEOS-10 computer code, see https://gmd.copernicus.org/articles/14/6445/2021/gmd-14-6445-2021.pdf . There are thousands of papers that have been published in many journals (including journals in the Copernicus stable, such as Ocean Science) that have not included a DOI that duplicates the code of TEOS-10. For example, there are 1,500 citations to the Getting Started book, https://www.teos-10.org/pubs/gsw/v3_04/pdf/Getting_Started.pdf , but as far as I am aware, none of these papers have duplicated the TEOS-10 code. Duplicating this code is a risky practice and is not recommended.
Here is one of the announcements of TEOS-10, published in Deep-Sea Research https://www.teos-10.org/pubs/Announcement_Deep_Sea_Research.pdf .
Citation: https://doi.org/10.5194/egusphere-2025-5406-CC3
CEC6: 'Citations and code archival', David Ham, 22 Dec 2025
It is important to distinguish here between giving proper credit to the TEOS-10 algorithm, by citing McDougall and Barker (2011), and identifying the precise version of the code used in this paper. Neither citing that paper nor referencing https://www.teos-10.org/software.htm can do the latter. If it is the preferred practice in the relevant community that that paper is the definitive citation and that that is the preferred download location, then of course both of those things should appear in the paper. The issue here is that this is not sufficient for the reader to understand exactly which version of which software was used. Even a reference to the git repository in question, which is only one of a number of different implementations of TEOS-10 presented on the website, does not identify the particular version used.
Further, GitHub is not a suitable archive location. I note, for example, that development of the GSW-Python package has only been in that Git repository since 2017, much more recently than 2011, and that development could move elsewhere if, for example, GitHub change their policies in a manner inconvenient for the project. Git commit hashes can also become invalid if the project history is rewritten, for example to remove code included in breach of copyright. It is for this reason that GitHub themselves publish a mechanism for making citable archived copies of particular versions of repositories on Zenodo (https://docs.github.com/en/repositories/archiving-a-github-repository/referencing-and-citing-content#issuing-a-persistent-identifier-for-your-repository-with-zenodo).
It is apparent that the authors of GSW-Python are aware of the GitHub-Zenodo linkage, because they have used it (Firing et al., 2021). It is surprising to hear that there is felt to be an in-principle objection to an archival mechanism that the authors themselves have used and, indeed, link to from the README file in their own repository (https://github.com/TEOS-10/GSW-Python/blob/main/README.md). Indeed, if anything, the somewhat problematic issue here is that by not issuing Zenodo records for releases since 2021, the package authors may induce a user to cite the wrong version of their software.
Prof. McDougall's objection to using this mechanism is that creating copies of the software violates the principle of a single source of truth. However, computers work by copying. Avoiding copies, even publicly accessible copies, of open-source software is simply impossible. Indeed, it is straightforward to find several sources of the code in question:
- The Conda package (https://anaconda.org/channels/conda-forge/packages/gsw/overview), which is published by the package authors and listed as the preferred installation route on the TEOS website.
- The Python package on PyPI (https://pypi.org/project/gsw/), likewise published by the package authors.
- The Ubuntu package (https://launchpad.net/ubuntu/+source/gsw), packaged by the Debian Science team.
- The Fedora package (https://packages.fedoraproject.org/pkgs/python-gsw/python3-gsw/), maintained by the pseudonymous qulogic.
- 34 forks of the GitHub package.

Public proliferation of copies is normal and inevitable for this sort of important open-source software. It is instead the traceability of copies that ensures that a single source of truth is maintained. What is being asked for here is that a reference is provided to a persistent copy of the exact version used in this work. If this is done via Zenodo, then this can refer directly back to the current version of the software on GitHub, as all the routes above do. Further, this is something that the package authors have done in the past. In the light of the above, I am at a loss to understand why this is considered problematic.
As the latter 3 examples illustrate, there isn't really a distinction between this archiving being done by the package authors or by a third party such as the paper authors. The examples further illustrate that there is no convention or practice that third parties do not publish copies of this sort of software.
Finally, I note that several comments indicate that previous papers have been published without a precise citation, including in GMD. This simply reflects that publishing full provenance information for geoscientific modelling papers is an evolving process. Over time we have got better at understanding what can and should be done, and the effectiveness of enforcement of the policies and principles has improved. That sometimes means that past practice is no longer considered best practice today.
My understanding of the best practice in this situation would be for the precise version of the software to be archived in a long-term archive and referred to via a persistent identifier such as a DOI. This appears to me straightforward and not in conflict with the past practice surrounding this package.
References
Eric Firing, Filipe, Andrew Barna, & Ryan Abernathey (2021). TEOS-10/GSW-Python: v3.4.1.post0 (v3.4.1.post0). Zenodo. https://doi.org/10.5281/zenodo.5214122
McDougall, T. J. and P. M. Barker, 2011: Getting started with TEOS-10 and the Gibbs Seawater (GSW) Oceanographic Toolbox, 28 pp., SCOR/IAPSO WG127, ISBN 978-0-646-55621-5.
Citation: https://doi.org/10.5194/egusphere-2025-5406-CEC6
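As a side note on traceability, recording the exact installed release of the package is straightforward on the user side; the snippet below is a generic sketch, not part of the authors' workflow.

```python
# Generic sketch: record the exact installed release of the gsw package, so that an
# archived copy or citation can be tied to the specific version used in an analysis.
from importlib.metadata import version

print("gsw", version("gsw"))
```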
CC4: 'Comment on egusphere-2025-5406', Susan Lozier, 17 Dec 2025
Speaking on behalf of the OSNAP International steering committee, I am opposed to any proliferation of DOIs for the OSNAP dataset, for the same reasons that Trevor McDougall has articulated for TEOS-10. The OSNAP data are publicly available and can be freely downloaded from the official DOI (https://doi.org/10.35090/gatech/78023). Any such reproduction is bound to cause confusion. To move forward, a clearer explanation of how the current DOI violates GMD policy would be helpful.
Citation: https://doi.org/10.5194/egusphere-2025-5406-CC4
CEC4: 'Reply on CC4', Juan Antonio Añel, 17 Dec 2025
Dear authors,
The problem with storing assets in the Georgia Tech Library services has nothing to do with the DOI, but with the lack of a long-term preservation and data-removal policy. We request evidence of secured funding or a commitment to operate and preserve the assets for, usually, 15–20 years (10 years in some exceptional cases). Also, we request the existence of a clear policy ensuring that, once the assets are deposited, they will not be removed except in exceptional cases (illegal content or similar) and that authors cannot modify or delete the deposited content. Unfortunately, the Georgia Tech Library, to the best of our knowledge and given the published documentation, does not comply with the mentioned requirements.
However, I want to note that this is not the only outstanding issue with your manuscript. We refer you here to my previous comment and ask you to address all the mentioned outstanding issues. For example, we cannot accept GitHub for storing your code, and despite any preferences you have regarding republication of code and data, all manuscripts submitted to GMD must comply with the policy of the journal. Regarding your worries about republication, I should note that creating and publishing new copies of TEOS-10 is not under your control, as it is under the BSD license. Anyone can copy and publish it elsewhere. This need for the ability to republish extends to almost all the assets necessary to replicate a manuscript submitted to GMD, as a license that allows redistribution is necessary to be able to access and verify the code and data for a submitted manuscript.
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/egusphere-2025-5406-CEC4
CEC5: 'Reply on CEC4', David Ham, 22 Dec 2025
Dear all,
Having looked at the policies posted on the Georgia Tech Library repository, I can confirm that this is a suitable data repository and hence the archiving of these data is compliant with GMD policy. I will respond on the TEOS code issue separately.
Sincerely,
David
Citation: https://doi.org/10.5194/egusphere-2025-5406-CEC5
RC1: 'Comment on egusphere-2025-5406', Anonymous Referee #1, 23 Dec 2025
Thank you for your document “Simulated and Observed Transport Estimates across the Overturning in the Subpolar North Atlantic (OSNAP) Section”. It is a very useful contribution to our understanding and obviously contains a lot of analysis. My recommendations are mostly about exploring some relationships a bit more, putting results into context with some other studies and improving the clarity of the results. My suggestion is for major revisions – I do not think the revisions themselves are particularly serious, but am aware that they might take some time.
Major
- There are a couple of places where the authors make the point that we don’t know that OW dominates OE for decadal and longer variability (L243, final sentence). This is true, but an obvious question is what do the models show? Do any/many of the models suggest that stronger variability can be seen at OW? Is there a relationship between decadal variability at OW and shorter timescale variability/mean state which might let us infer what the observations might show?
- You split your models into high and low resolution, but it seems to me that there are 3 groups of about 6 models each – low resolution (>0.25 degrees), medium resolution (~0.25 degrees), and high resolution (<0.25 degrees). Does this split change anything about your conclusions of resolution?
- Your timeseries plots are difficult to read because it’s difficult to tell some of the colors apart. Some possible options: use different line styles as well as colors; split the models into low, medium and high resolution, with legends for each separately.
- The paper is quite long; however, quite a lot of space is taken up by descriptions of plots, such as values for particular models, etc. This might be important to highlight in some cases, but some parts of the text feel like a long description of the figure. Ideally this information could be read off the plot by an interested reader, and more space could be used to discuss the more interesting findings, such as: common model biases; differences with resolution; relationships between variables; possible causes and implications of specific differences (e.g. the authors do discuss the implications of having a better overflow representation in some models); and how the results fit with other findings. This is done sometimes, but not always. I've made some suggestions in the points below.
Minor
L52 Remove ‘general’ – I’m not sure what it means here.
L75 These reviews are not recent anymore!
L88 comma after weakening
L123 I think I’m correct in saying that fluxes may also be important over the Irminger sea as well as the Lab sea?
L125-131 The text implies that all models show a link between low frequency AMOC variability and the Labrador sea which is not true, certainly in coupled models - eg see https://doi.org/10.1007/s00382-023-07069-y
L200 I’m surprised that the authors do not also include a discussion of Yeager et al https://doi.org/10.1126/sciadv.abh3592 , which has a different mechanism for the interaction between the regions
L221 conclude that coupled models
L276 It’s not clear to me why MOM5 and 6 are listed as different models, but the NEMO versions are not.
L286 Define z* or reference something to explain what it is
L378 I think the dz/dsigma should be inside the dx since the thickness is normally integrated over x as well.
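As an illustration of this point, a common generic form of the density-space streamfunction keeps the layer thickness inside the along-section integral; this is an illustrative expression under the usual conventions, not necessarily the manuscript's exact definition:

```latex
% Illustrative form only: the layer thickness h = -\partial z/\partial\sigma varies along
% the section, so it must stay inside the along-section (x) integral.
\begin{equation*}
  \Psi(\sigma, t) = \int_{x_W}^{x_E} \int_{\sigma}^{\sigma_{\mathrm{max}}}
      v(x, \sigma', t)\,
      \left( -\frac{\partial z}{\partial \sigma'} \right)
      \mathrm{d}\sigma'\, \mathrm{d}x .
\end{equation*}
```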
L405 You quote densities as eg 24 kg/m3 rather than 1024 kg/m3 – you need to make this point somewhere.
L465 It looks to me that the black line is plotting the density of the maximum transport rather than a timeseries of the maximum transport itself.
L482 Do you mean annual mean of the maximum over depth of the monthly mean transports? It’s a bit long, but at the moment it isn’t clear what the maximum and average are taken over
L497 Is the weakening trend at OE also seen in most models? How does the overall trend compare to other studies eg https://doi.org/10.1038/s43017-022-00263-2 and references therein
L558 The reference to ‘high densities’ seems to suggest relatively high densities for what we are interested in, however it looks like you’re referring to densities where the transport is maximum here. Suggest rewording
L565 This suggests that in the observations the northward transports have a narrower range of densities than in the models. Is there anything which can be said about why that might be?
L632 What are the deeper/denser levels? Can you say anything about why this might be and the implications?
L650 This isn’t obvious to see from the plots – it looks like the scale is saturated for some plots. Could you do a quantitative comparison, eg comparing the mean of the north/south flows?
L654 maximum aggregated transports
L705 Is there also a relationship of mean strength or monthly variability with annual/decadal variability?
L709 simulated ensemble mean SDs
L710 and elsewhere. The SD should have units of Sv I believe
L732 clarity?
L738 This sounds like you’re saying that the densification is happening on the OSNAP section rather than that the upper/lighter waters are being densified to the north and then returning south. I think this needs a bit of clarification
L743 Why might the models be also freshening as well as cooling unlike the obs? Could this be related to too much mixing in overflows? Or something else?
L769 Note that UKMO25 captures the density though not T and S.
Fig 16 and others. It is hard to see the deep values – it looks like the deep values are plotted first and then the shallower ones later, and in some cases the shallower values appear to plot over the deep values and obscure them. It might make the figures clearer to plot the shallower data first and the deeper data last.
Section 8. There are a lot of points here which are not explored. There are other studies showing a stronger AMOC in more saline models eg https://doi.org/10.1175/JCLI-D-22-0464.1 https://doi.org/10.5194/os-17-59-2021 https://doi.org/10.1016/j.ocemod.2013.10.005 So discussion about how this seems a robust feature would be welcomed. Also some more discussion about density compensation would be useful. Are there any thoughts about why there is density compensation in some places but not others? What about density compensation in the observations? This paper suggests that there is density compensation for OW https://doi.org/10.1038/s41561-019-0517-1 How does that compare with what you see? Could biases in the models compared with the observations affect the density compensation? Finally you could do with some motivation about why you’re looking at Fig 18 as well as 17 and whether anything can be concluded from considering results together.
Fig 18 The caption is a little unclear. Is the y axis the total AMOC across both sections rather than OW and OE separately?
L886 total velocities -> net transports
L900 Are there differences in biases with resolution?
L929-935 I found this rather confusing – are you always using mean as time-mean and max as max over density? Is the point that you’re making here that the annual time mean of the maximum over density is not the same as the maximum over density of the annual time mean?
L1109 ‘the same numerical schemes and parameterizations’ – but do they not have different ice models? This sentence implies that the overall models have similar set up other than resolution, but this isn’t true. Maybe just needs a comment here.
Appendix B
It really isn’t clear why the focus here is on two models only, particularly since the authors not that the differences are likely to be model dependent. What is the aim of this section? If it is to document differences then surely it would make sense to plot them all. Otherwise please explain the motivation/aim of the section.
Citation: https://doi.org/10.5194/egusphere-2025-5406-RC1
RC2: 'Comment on egusphere-2025-5406', Dmitry Sidorenko, 28 Dec 2025
This work provides a valuable benchmark for the modelling community for evaluating simulations at the OSNAP section. It draws on six different ocean GCMs and a wide range of model simulations. All models are consistent with the earlier findings of Lozier et al. (2019), showing that OSNAP East largely determines the strength and variability of the AMOC. This further supports the conclusion that convection in the Labrador Sea may not, as previously thought, be the primary driver of AMOC variability.
Although all models reinforce this central message, the model spread at OSNAP is, unsurprisingly, substantial. The analysis is comprehensive and highly valuable, and the paper is clearly structured and professionally written. The manuscript would be suitable for publication in GMD following the authors’ responses to the minor comments listed below.
line 55. worth mentioning here that they cancel each other at constant depth and density.
line 116: Do you have any citations for this? I am unsure whether this statement is correct. One can certainly say many things about coupled versus forced AMOC simulations. Xu et al. (2019), if I remember correctly, reported different observations in their paper ‘On the variability of the Atlantic meridional overturning circulation transports in coupled CMIP5 simulations’ (https://doi.org/10.1007/s00382-018-4529-0). Gent (2018) also discussed this issue in ‘A commentary on the Atlantic meridional overturning circulation stability in climate models’ (https://doi.org/10.1016/j.ocemod.2017.12.006).
line 126: the findings by Ortega et al. 2012, 2021 were further confirmed using AWI-CM by Sidorenko et al. in AMOC variability and watermass transformations in the AWI climate model. Journal of Advances in Modeling Earth Systems, 13, e2021MS002582. https://doi.org/10.1029/2021MS002582
line 186: also in FESOM forced ocean by Sidorenko et al. (2020). AMOC, water mass transformations, and their responses to changing resolution in the Finite-volumE Sea ice-Ocean model. Journal of Advances in Modeling Earth Systems, 12, e2020MS002317. https://doi.org/10.1029/2020MS002317
line 188: I am a strong supporter of the work by Megann et al. (2021), which you cite above. Their interpretation is that Subpolar Mode Water formed in the northeastern Atlantic, which initially retains relatively high buoyancy, is advected into the Irminger Sea, where it loses buoyancy, and is subsequently transformed in the Labrador Sea—following further buoyancy loss—into Upper North Atlantic Deep Water before being exported southward. This view is consistent with McCartney and Talley (1982). Megann et al. (2021) also introduced a buoyancy-loss–ramped accumulation index that successfully reproduces the decadal variability of the AMOC, which could be a valuable approach to consider in the proposed follow-up study.
line 353: How would you expect the results to change if instantaneous sampling were used to compute the AMOC in density space?
line 590: If the transports within each bin are provided in Sv, I would suggest plotting the thin blue, red, and black lines in Figures 7–10 in a stepwise manner, since the transports are already aggregated within the respective bins (depth ranges).
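As an illustration of this suggestion, a hypothetical matplotlib sketch with synthetic bin values (not the paper's figures):

```python
# Hypothetical sketch of the suggested stepwise rendering for transports that are
# already aggregated within density bins; values and bin edges are synthetic.
import numpy as np
import matplotlib.pyplot as plt

edges = np.arange(26.0, 28.1, 0.1)                                    # density bin edges (kg/m^3)
per_bin = np.random.default_rng(1).normal(0.0, 1.0, edges.size - 1)   # transport per bin (Sv)

plt.stairs(per_bin, edges, color="k")   # piecewise-constant within each bin
plt.xlabel(r"$\sigma_0$ (kg m$^{-3}$)")
plt.ylabel("Transport (Sv)")
plt.show()
```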
line 708: Just a question: to what extent would a more realistic simulation of the Arctic Ocean improve the representation of the OSNAP transports?
line 785: Here you use the depth range above 700 m, whereas in the section above you considered depths deeper than 500 m for the T–S diagrams, in order to exclude the larger surface biases. How would Figures 17 and 18 change if deeper biases were considered instead?
line 834: More generally, in the context of model bias analysis, this suggests that the AMOC may transport water masses with similar (and less biased) densities that nonetheless occupy different regions of T–S space. This behaviour appears to be common across many regions of the global conveyor belt.
lines 874–876: Here, σ0 should also be explicitly mentioned and a reference to Fig. 6 added; otherwise, the reader may become confused.
line 886: The same correction as in the abstract applies here: both flows cancel each other at constant depth and density.
I wonder whether a figure similar to Fig. 2 in Lozier et al. (2019) would show comparable behavior across the models. Such a comparison could potentially serve as a useful benchmark. The same applies to their Fig. 3. Some brief discussion of this point would be nice to have.
Citation: https://doi.org/10.5194/egusphere-2025-5406-RC2
RC3: 'Comment on egusphere-2025-5406', Chuncheng Guo, 01 Jan 2026
This paper presents a coordinated comparison of modelled and observed overturning transports across OSNAP-West and -East sections, using a large number of forced OMIP-type simulations spanning a wide range of ocean/sea-ice models and resolutions. The paper is explicitly framed as descriptive and benchmarking, rather than mechanistic explorations, and is largely successful in achieving that goal. The topic is highly relevant for the ocean and climate modelling community, and it will provide valuable insights into how well ocean models capture the OSNAP overturning circulation in both depth and density space. The paper is overall well-written, and I only have some minor/technical comments.
- Near the end of the Introduction, the authors clearly state that the paper is “largely descriptive”, which is welcome; however, before that, there is heavy text with extensive discussions of, e.g., the control of Labrador vs. Irminger Seas, which might make it read like a hypothesis-driven AMOC mechanism paper. I am not sure if/how this could be improved, though - how about clarifying this somewhere earlier in the text? I am not sure if the authors share a similar view, and I will leave it to them to decide.
- The differences in results between sigma_0 and sigma_2 space (such as the relative contributions from OW) are important findings. For a broader audience, it would be beneficial if the authors, where appropriate, could expand and articulate a bit why using sigma_2 alters the results.
- L47, “...ocean - sea ice simulations…”
- L206, “compensation”
- L344-349, the authors claim that they “analyze the same forcing cycles for a given LR and HR set of simulations from the same group”; however, this does not seem to be the case in the following descriptions. For example, ANU10 (HR) is not in the same cycle as its LR counterparts of ANU1 and ANU25.
- L489, “FSU72”
- Figure 7 and Figure 8 (and other pairs of figures), here the figures are not separated as HR vs. LR (as in previous text and figures), but rather separated alphabetically - is there any rationale behind the change? It would be helpful to explain a bit, or at least mention this change in the text.
- Fig. 18, if I understand correctly, the MOC here is the total, i.e., OW+OE; is there a reason not to separate it, as in Fig. 17? Also, would it be useful to add the OSNAP to the scatter plots, as in Fig. 17?
Citation: https://doi.org/10.5194/egusphere-2025-5406-RC3