This work is distributed under the Creative Commons Attribution 4.0 License.
The updated Multi-Model Large Ensemble Archive and the Climate Variability Diagnostics Package: New tools for the study of climate variability and change
Abstract. Observations can be considered as one realisation of the climate system that we live in. To provide a fair comparison of climate models with observations, one must use multiple realisations or ensemble members from a single model and assess where the observations sit within the ensemble spread. Single model initial-condition large ensembles (LEs) are valuable tools for such an evaluation. Here, we present the new multi-model large ensemble archive (MMLEAv2) which has been extended to include 18 models and 15 two-dimensional variables. Data in this archive has been remapped to a common 2.5 x 2.5 degree grid for ease of inter-model comparison. We additionally introduce the newly updated Climate Variability Diagnostics Package version 6 (CVDPv6), which is designed specifically for use with LEs. The CVDPv6 computes and displays the major modes of climate variability as well as long-term trends and climatologies in models and observations based on a variety of fields. This tool creates plots of both individual ensemble members, and the ensemble mean of each LE including observational rank plots, pattern correlations and root mean square difference metrics displayed in both graphical and statistical output that is saved to a data repository. By applying the CVDPv6 to the MMLEAv2 we highlight its use for model evaluation against observations and for model inter-comparisons. We demonstrate that for highly variable metrics a model might evaluate poorly or favourably compared to the single realisation the observations represent, depending on the chosen ensemble member. This behaviour emphasises that LEs provide a much fairer model evaluation than a single ensemble member, ensemble mean, or multi-model mean.
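To illustrate the kind of preprocessing the abstract describes, here is a minimal sketch of remapping a field onto a common 2.5 x 2.5 degree grid. This uses nearest-neighbour lookup for brevity, and the target grid layout is an assumption; it is not the authors' actual regridding pipeline.

```python
import numpy as np

def remap_to_common_grid(field, src_lat, src_lon, res=2.5):
    """Remap a 2D (lat, lon) field to a common res-degree grid.

    Nearest-neighbour index lookup; cell-centre layout is assumed.
    """
    tgt_lat = np.arange(-88.75, 90.0, res)   # 72 latitude centres (assumed)
    tgt_lon = np.arange(0.0, 360.0, res)     # 144 longitude centres
    # for each target coordinate, find the closest source index
    ilat = np.abs(src_lat[:, None] - tgt_lat[None, :]).argmin(axis=0)
    ilon = np.abs(src_lon[:, None] - tgt_lon[None, :]).argmin(axis=0)
    return field[np.ix_(ilat, ilon)], tgt_lat, tgt_lon

# usage on a toy 1-degree source grid
src_lat = np.arange(-89.5, 90.0, 1.0)
src_lon = np.arange(0.0, 360.0, 1.0)
field = np.random.rand(src_lat.size, src_lon.size)
out, la, lo = remap_to_common_grid(field, src_lat, src_lon)
```

A production pipeline would use conservative or bilinear remapping (e.g. via ESMF/CDO tools); the sketch only shows the grid-standardisation idea.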
Status: final response (author comments only)
CEC1: 'Comment on egusphere-2024-3684', Juan Antonio Añel, 05 Feb 2025
Dear authors,
After checking your manuscript we think that it is not in compliance with the policy of our journal. You present the outputs from simulations performed with 18 climate models. However, you do not provide repositories for the code of the models that you have used. According to our policy (https://www.geoscientific-model-development.net/policies/code_and_data_policy.html) you must publish the code of any model that you have used to produce your manuscript.
Therefore, we have to request that you reply to this comment with the links and permanent handle (e.g. DOI) for each one of the repositories containing the code of the models involved in producing the results that you present. Also, you should include the information in any potentially reviewed version of your manuscript.
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/egusphere-2024-3684-CEC1
AC1: 'Reply on CEC1', Nicola Maher, 07 Feb 2025
Dear Juan A. Añel,

Thank you for your interest and comments on our manuscript. We disagree that the manuscript is not in compliance with the policy of the journal. We present a regridded archive of 18 climate model simulations, but this is a compilation of existing data, and the simulations and code have been documented in other papers. The model description papers for all 18 climate models are correctly referenced in Table 1. Our methodology for regridding and standardizing the model output for public use is documented extensively in the paper.

We also present a new version of the Climate Variability Diagnostics Package (CVDP) as developed by the authors of this paper. We have created a frozen code version that is referenced to Zenodo in the paper; this can be found in the code and data availability section, which states: "All processed data, and a frozen version of the CVDPv6 code used in this publication are available on zenodo at doi:10.5281/zenodo.14292580."

Kind regards,
Nicola Maher & Co-authors

Citation: https://doi.org/10.5194/egusphere-2024-3684-AC1
CEC2: 'Reply on AC1', Juan Antonio Añel, 10 Feb 2025
Dear authors,
Thanks for the reply. Unfortunately, I have to disagree with you. In this manuscript you present the MMLEAv2, which is an ensemble of model simulations. Therefore, you are publishing a paper on results from simulations (at least part of your manuscript is on this). These simulations are produced with models. Therefore, to replicate your work, it is necessary to have the climate models that produce the MMLEAv2, to run them. In Table 1 of your manuscript you cite "papers", not repositories for the code of the models. For example, I checked the first model in the table and followed the reference cited. This takes the reader to a published paper which does not contain a single mention of how to get access to the code of the model. Therefore, what you must cite in Table 1 are the repositories containing the models, not the papers describing them. You can cite the papers, but information about the permanent repositories for the models' code is mandatory.
Also, we would be grateful if you could reply to this comment with an update on the repository in the NCAR RDA that contains the data. A sensible amount of time has passed since you submitted your manuscript, and we would expect that the upload of the data has finished and the datasets are public now. In this regard, your manuscript should not have been accepted in Discussions because of such a shortcoming. The procedure is to publish the data and then submit the manuscript, not to submit the manuscript and claim that the data will be made public.
At this point, I recommend that the Topical Editor stall the review process for your manuscript until all the outstanding issues related to compliance with the code and data policy of the journal are solved.
Please, reply to the issues mentioned above as soon as possible.
Juan A. Añel
Geosci. Model Dev. Executive Editor
Citation: https://doi.org/10.5194/egusphere-2024-3684-CEC2
CEC3: 'Reply on CEC2', David Ham, 22 Feb 2025
After further discussion among the executive editors, we have come to the conclusion that the previous post stems from confusion on the part of the executive editor about whether new simulations were conducted for this manuscript. It is clear that they were not. There is therefore no need for the simulation model code to be archived as a part of this manuscript.
The outstanding code and data issues to be addressed are:
- Citing a persistent public archive of the MMLEAv2. This should have happened before submission as GMD does not permit the submission of manuscripts based on embargoed data.
- Ensuring that the citations for data used as inputs to the creation of MMLEAv2 are comprehensive, persistent and correct. Currently many of these are simply bare URLs, not all of which even resolve.
Apologies for the confusion.
Citation: https://doi.org/10.5194/egusphere-2024-3684-CEC3
RC1: 'Comment on egusphere-2024-3684', Anonymous Referee #1, 05 Feb 2025
The manuscript documents the updated multi-model large ensemble archive and the climate variability diagnostics package version 6 (CVDPv6). Examples of different analyses available with CVDPv6 are also provided.
Overall the paper provides a reference for two extremely valuable resources and should be published. However, the analysis highlighted does not stress the need to critically evaluate the fidelity of the modelled forced response. In particular, potential signal to noise errors in the forced response would directly impact the validity of the modelled internal variability, with major implications for any comparison to observations. The multi-model large ensemble archive is a key tool to make progress here and I recommend adding some discussion on this.
Line 19: state that it is the modelled forced response and internal variability that can be quantified - the real world could be very different.
Line 29: evaluating whether the observations sit within the model spread is insufficient - the fidelity of the model response to external forcings (and relatedly its representation of internal variability) is crucial.
Line 74: I believe there are several other LEs in the CMIP6 archive, at least over the historical period - are there plans to include these?
Line 85: including winds at different levels would be a great addition - most extremes are related to atmospheric circulation and winds are crucial for assessing the forced response and internal variability of extremes.
Line 138: detrending does not separate external forcing and internal variability since some external forcing including aerosols, solar and volcanoes, produce variability beyond the trend.
Line 143: please add 'modelled' to 'internal variability'.
Line 151: could you provide a link to the CVDPv6 output webpage please?
Line 152: should this say Table 3?
Line 153: would that be the average pattern correlation for individual members?
Line 175: please check the link - it didn't work for me
Figure 1: apologies I don't understand this, what does a pattern correlation mean for an index such as ENSO which is a timeseries? Also, are all panels needed or would it be better to illustrate with one or two?
Figure 2: same comment as Fig 1, also what are the colours?
Line 220: but presumably evaluating more members is better than a single member?
Figure 3: Please provide a legend for the ensemble summary panel
Figure 4: There are far too many panels and the text is impossible to read. I suggest just showing a couple of models to illustrate the type of output. It appears that the model patterns are based on the ensemble means which should not be compared to observations - if this is not the case please clarify. The observations are the same in every row so really do not need to be repeated.
Lines 247-267: This ignores potential impacts of forcings on the modes that could be very important. I am certain many of the authors are aware of this, indeed I believe the lead author has written about volcanic impacts on ENSO, so I am puzzled why this is ignored here.
Figure 5: This appears to illustrate the same output as Fig 4 so could be removed
Figure 6: I presume the dark blue curve shows the model power spectrum rather than the timeseries? Again, I think the average of ensemble members rather than the ensemble mean should be compared with observations. The caption states that observations are grey but it looks more like brown to me.
Line 271: also worth pointing out that La Nina rainfall impacts are not well represented over up to 40% of the globe. Is there a way to highlight, perhaps with stippling, where the observations lie outside the range of ensemble members?
Figure 8: GISS-E2-G, CESM2 and UKESM1-0-LL are not shown
Lines 278-293: Highlighting model differences is very important, but I suggest also highlighting the need to address these differences perhaps through emergent constraints.
Figure 9: the period in the caption (1950-1958) is not consistent with the plots (1950-1959). Again, I suggest showing fewer models to illustrate the output more clearly.
Line 296: Can CVDPv6 also compare different scenarios?
Citation: https://doi.org/10.5194/egusphere-2024-3684-RC1
AC3: 'Reply on RC1', Nicola Maher, 04 Mar 2025
Response to Reviewer 1:
The manuscript documents the updated multi-model large ensemble archive and the climate variability diagnostics package version 6 (CVDPv6). Examples of different analyses available with CVDPv6 are also provided.
Overall the paper provides a reference for two extremely valuable resources and should be published. However, the analysis highlighted does not stress the need to critically evaluate the fidelity of the modelled forced response. In particular, potential signal to noise errors in the forced response would directly impact the validity of the modelled internal variability, with major implications for any comparison to observations. The multi-model large ensemble archive is a key tool to make progress here and I recommend adding some discussion on this.
We fully agree with the Reviewer's points and will add some discussion to address these issues.
Line 19: state that it is the modelled forced response and internal variability that can be quantified - the real world could be very different.
We will change this on revision as suggested.
Line 29: evaluating whether the observations sit within the model spread is insufficient - the fidelity of the model response to external forcings (and relatedly its representation of internal variability) is crucial.
In our revision we will add a statement that the MMLEAv2 could also help to evaluate the amplitude of internal variability in models, which influences the results of the percentile analysis.
Line 74: I believe there are several other LEs in the CMIP6 archive, at least over the historical period - are there plans to include these?
We made the choice to add any LEs from CMIP6 for which we could obtain data with historical + ssp370 or ssp585 simulations. We do not plan to add additional LEs at this point.
Line 85: including winds at different levels would be a great addition - most extremes are related to atmospheric circulation and winds are crucial for assessing the forced response and internal variability of extremes.
This would indeed be useful as would some other variables. We have chosen to make the archive public now to enable scientific research with what we have rather than spend more time adding additional variables.
Line 138: detrending does not separate external forcing and internal variability since some external forcing including aerosols, solar and volcanoes, produce variability beyond the trend.
We will clarify the word ‘detrending’ in the revision.
Line 143: please add 'modelled' to 'internal variability'.
We will change this on revision as suggested.
Line 151: could you provide a link to the CVDPv6 output webpage please?
We will add this on revision as suggested.
Line 152: should this say Table 3?
Yes – we will fix it, thanks.
Line 153: would that be the average pattern correlation for individual members?
To be clarified on revision
Line 175: please check the link - it didn't work for me
Apologies – the link works, but the website went offline for a week and this is likely when you tried to access it.
Figure 1: apologies I don't understand this, what does a pattern correlation mean for an index such as ENSO which is a timeseries? Also, are all panels needed or would it be better to illustrate with one or two?
This is the ENSO pattern – i.e. the spatial pattern in the tropical Pacific.
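For readers unsure what a pattern correlation of a mode means, it compares two spatial maps (e.g. the model's and the observed ENSO regression pattern). A minimal area-weighted sketch, illustrative only and not the CVDPv6 code (which is written in NCL):

```python
import numpy as np

def pattern_correlation(a, b, lat):
    """Centred, cos(latitude)-weighted correlation of two (lat, lon) maps."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(a)  # area weights
    wa = a - np.average(a, weights=w)   # remove weighted spatial mean
    wb = b - np.average(b, weights=w)
    cov = np.average(wa * wb, weights=w)
    return cov / np.sqrt(np.average(wa**2, weights=w) *
                         np.average(wb**2, weights=w))
```

Identical maps give 1.0 and sign-flipped maps give -1.0, so the statistic measures how well the spatial structure of a mode matches, independent of its amplitude.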
Figure 2: same comment as Fig 1, also what are the colours?
The colours are the RMS value – we will add this to the caption.
Line 220: but presumably evaluating more members is better than a single member?
Yes – but perhaps not necessary – we will clarify what we meant by this on revision.
Figure 3: Please provide a legend for the ensemble summary panel
The colours come from the titles in the plots above – we will clarify in the caption.
Figure 4: There are far too many panels and the text is impossible to read. I suggest just showing a couple of models to illustrate the type of output. It appears that the model patterns are based on the ensemble means which should not be compared to observations - if this is not the case please clarify. The observations are the same in every row so really do not need to be repeated.
This is already a subset of models and our preference is to keep it as is.
Lines 247-267: This ignores potential impacts of forcings on the modes that could be very important. I am certain many of the authors are aware of this, indeed I believe the lead author has written about volcanic impacts on ENSO, so I am puzzled why this is ignored here.
Line 250 already addresses this point by saying that ‘We note that changes in the variability itself are not removed using this methodology, so changes in the variability itself can be assessed using this method.’
We will add ‘due to external forcing, e.g. GHG/volcanoes’ to the revised text.
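One common way to separate the forced response from internal variability in a large ensemble is to treat the ensemble mean as the forced signal and the per-member residual as internal variability. This is a sketch of that standard decomposition, not necessarily the exact CVDPv6 detrending option:

```python
import numpy as np

def split_forced_internal(members):
    """Decompose an ensemble of a climate index.

    members: array of shape (n_members, n_time).
    Returns the ensemble-mean forced estimate and the per-member
    internal-variability residuals.
    """
    forced = members.mean(axis=0)   # forced response estimate
    internal = members - forced     # internal variability per member
    return forced, internal
```

Note that, as discussed above, this removes the forced change in the mean but not forced changes in the variability itself, which can then still be assessed in the residuals.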
Figure 5: This appears to illustrate the same output as Fig 4 so could be removed
Figure 5 is the PDV while Figure 4 is the NAO so they show different modes of variability.
Figure 6: I presume the dark blue curve shows the model power spectrum rather than the timeseries? Again, I think the average of ensemble members rather than the ensemble mean should be compared with observations. The caption states that observations are grey but it looks more like brown to me.
We are not sure what you mean by ‘average of ensemble members’ – is this not the ensemble mean? The caption will be changed to say grey/brown.
Line 271: also worth pointing out that La Nina rainfall impacts are not well represented over up to 40% of the globe. Is there a way to highlight, perhaps with stippling, where the observations lie outside the range of ensemble members?
This is already shown in the ranks on the RHS.
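For readers unfamiliar with the rank diagnostic mentioned here, a minimal sketch of how an observational rank within an ensemble spread can be computed (illustrative; the CVDPv6 rank plots may define the statistic differently):

```python
import numpy as np

def obs_rank(obs_value, member_values):
    """Fraction of ensemble members below the observed value.

    0.0 means the observation lies below every member, 1.0 above
    every member; values at the extremes flag where observations
    fall outside the simulated range.
    """
    return float((np.asarray(member_values) < obs_value).mean())
```

Applied at every grid point, this yields the rank maps that indicate where observed impacts (e.g. La Nina rainfall teleconnections) sit outside the ensemble range.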
Figure 8: GISS-E2-G, CESM2 and UKESM1-0-LL are not shown
This is because they are not in set2 models (see caption).
Lines 278-293: Highlighting model differences is very important, but I suggest also highlighting the need to address these differences perhaps through emergent constraints.
We will add a sentence on this point in revision.
Figure 9: the period in the caption (1950-1958) is not consistent with the plots (1950-1959). Again, I suggest showing fewer models to illustrate the output more clearly.
As previously, we would like to keep all models here; the caption will be fixed.
Line 296: Can CVDPv6 also compare different scenarios?
Yes, the CVDP can compare different scenarios (across different times) in the same comparison. The CVDP can read in data from any simulation if the data follows CMIP's file convention format.
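To illustrate what following "CMIP's file convention format" implies in practice, here is a sketch of validating a CMIP6-style filename of the form `<variable>_<table>_<model>_<experiment>_<member>_<grid>_<timerange>.nc`. The pattern is simplified and the function is illustrative; it is not the CVDP's actual input check.

```python
import re

# Simplified CMIP6 filename pattern (assumed field classes; real
# controlled vocabularies are stricter than these character sets).
CMIP6_RE = re.compile(
    r"^(?P<variable>[A-Za-z0-9]+)_(?P<table>[A-Za-z0-9]+)_"
    r"(?P<model>[A-Za-z0-9-]+)_(?P<experiment>[A-Za-z0-9-]+)_"
    r"(?P<member>r\d+i\d+p\d+f\d+)_(?P<grid>[a-z0-9]+)_"
    r"(?P<time>\d+-\d+)\.nc$"
)

def parse_cmip6_filename(name):
    """Return the filename's metadata fields, or None if non-conforming."""
    m = CMIP6_RE.match(name)
    return m.groupdict() if m else None
```

Any simulation output renamed to match this convention could then be identified by variable, model, experiment and realisation without opening the file.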
Citation: https://doi.org/10.5194/egusphere-2024-3684-AC3
RC2: 'Comment on egusphere-2024-3684', Anonymous Referee #2, 18 Feb 2025
Review of “The updated Multi-Model Large Ensemble Archive and the Climate Variability Diagnostics Package: New tools for the study of climate variability and change” by Maher et al.
General comments
The authors present a valuable contribution to the field of climate model evaluation, the Climate Variability Diagnostics Package (CVDP), which is developed for collective evaluation of simulated climate variability modes. The manuscript is well-structured, and the availability of CVDP results for the Multi-Model Large Ensemble Archive (MMLEA) alongside the tool itself represents an important resource for the community. I appreciate the authors' efforts in developing CVDPv6 and see its value. However, it would be helpful to clarify what is novel beyond the integration of CVDP and CVDP-LE. Does CVDPv6 introduce any new scientific advancements or methodological improvements? Highlighting these aspects in a more clear way would further strengthen the manuscript.
Specific comments
Section 3.1: It would be helpful to include some technical details about the CVDP, such as the programming language it is implemented in and its compatibility with different operating systems.
Section 3.3: While I was able to access the CVDPv6 output for MMLEAv2 at https://webext.cgd.ucar.edu/Multi-Case/MMLEA_v2/ as indicated in the section, the current hosting approach—being a directory on a web server—raises concerns about long-term stability and version control. Compared to more permanent data archiving solutions such as DOI-based repositories (e.g., Zenodo), this method may not ensure robust traceability. Are there any plans to improve the long-term accessibility and versioning of these data products?
It would be beneficial to discuss potential linkages and/or interaction plans with other established evaluation tools, such as the Earth System Model Evaluation Tool (ESMValTool; Eyring et al., 2016), PCMDI Metrics Package (PMP; Lee et al., 2024), and Climate Model Assessment Tool (CMAT; Fasullo, 2020). To my knowledge, CVDP contributes to ESMValTool, and similar capability is also available in the PMP that can help (or has already helped?) cross-validation of the tools. Addressing these (potential) connections would provide readers with a comprehensive understanding of CVDP's role within the broader ecosystem of climate model analysis and evaluation tools.
Figures: Some multi-panel figures appear overly complex and difficult to read due to their small size. Is there a way to simplify these figures while maintaining clarity? For instance:
- In Figure 7, the middle column seems to repeat the same plot (ERA5_1) across multiple rows.
- In Figure 8, the second column (MEM) appears to contain redundant plots.
- In Figure 9, the third column (MEM Change) also seems repetitive.
- Figures 4 and 5 may similarly benefit from reorganization.
The authors could consider reducing the number of plots by eliminating redundancies and restructuring the panel layout to improve readability.
Overall, this manuscript provides an important resource for the climate modeling community. Addressing the points above would further enhance its clarity, impact, and accessibility.
References
Eyring, V., Righi, M., Lauer, A., Evaldsson, M., Wenzel, S., Jones, C., Anav, A., Andrews, O., Cionni, I., Davin, E. L., Deser, C., Ehbrecht, C., Friedlingstein, P., Gleckler, P., Gottschaldt, K.-D., Hagemann, S., Juckes, M., Kindermann, S., Krasting, J., Kunert, D., Levine, R., Loew, A., Mäkelä, J., Martin, G., Mason, E., Phillips, A. S., Read, S., Rio, C., Roehrig, R., Senftleben, D., Sterl, A., van Ulft, L. H., Walton, J., Wang, S., and Williams, K. D.: ESMValTool (v1.0) – a community diagnostic and performance metrics tool for routine evaluation of Earth system models in CMIP, Geosci. Model Dev., 9, 1747–1802, https://doi.org/10.5194/gmd-9-1747-2016, 2016.
Fasullo, J. T.: Evaluating simulated climate patterns from the CMIP archives using satellite and reanalysis datasets using the Climate Model Assessment Tool (CMATv1), Geosci. Model Dev., 13, 3627–3642, https://doi.org/10.5194/gmd-13-3627-2020, 2020.
Lee, J., Gleckler, P. J., Ahn, M.-S., Ordonez, A., Ullrich, P. A., Sperber, K. R., Taylor, K. E., Planton, Y. Y., Guilyardi, E., Durack, P., Bonfils, C., Zelinka, M. D., Chao, L.-W., Dong, B., Doutriaux, C., Zhang, C., Vo, T., Boutte, J., Wehner, M. F., Pendergrass, A. G., Kim, D., Xue, Z., Wittenberg, A. T., and Krasting, J.: Systematic and objective evaluation of Earth system models: PCMDI Metrics Package (PMP) version 3, Geosci. Model Dev., 17, 3919–3948, https://doi.org/10.5194/gmd-17-3919-2024, 2024.
Citation: https://doi.org/10.5194/egusphere-2024-3684-RC2
AC4: 'Reply on RC2', Nicola Maher, 04 Mar 2025
Response to Reviewer 2:
General comments
The authors present a valuable contribution to the field of climate model evaluation, the Climate Variability Diagnostics Package (CVDP), which is developed for collective evaluation of simulated climate variability modes. The manuscript is well-structured, and the availability of CVDP results for the Multi-Model Large Ensemble Archive (MMLEA) alongside the tool itself represents an important resource for the community. I appreciate the authors' efforts in developing CVDPv6 and see its value. However, it would be helpful to clarify what is novel beyond the integration of CVDP and CVDP-LE. Does CVDPv6 introduce any new scientific advancements or methodological improvements? Highlighting these aspects in a more clear way would further strengthen the manuscript.
Thank you for your positive feedback. There are indeed methodological improvements such as the inclusion of multiple detrending methods in the CVDPv6. We will add text to highlight this on revision.
Specific comments
Section 3.1: It would be helpful to include some technical details about the CVDP, such as the programming language it is implemented in and its compatibility with different operating systems.
The CVDP is written in NCL (The NCAR Command Language). NCL can be installed on most commonly used operating systems. We hope to have a python version of the CVDP available within the next year.
Section 3.3: While I was able to access the CVDPv6 output for MMLEAv2 at https://webext.cgd.ucar.edu/Multi-Case/MMLEA_v2/ as indicated in the section, the current hosting approach—being a directory on a web server—raises concerns about long-term stability and version control. Compared to more permanent data archiving solutions such as DOI-based repositories (e.g., Zenodo), this method may not ensure robust traceability. Are there any plans to improve the long-term accessibility and versioning of these data products?
This is a good point. CVDP output (available from the CVDP Data Repository) has been available from NCAR's website for 10+ years now, and up to this point no one has questioned whether this is the best way to go about distributing it. Currently we do not offer versioning of the output data. We will consider both versioning and increasing accessibility in the future.
It would be beneficial to discuss potential linkages and/or interaction plans with other established evaluation tools, such as the Earth System Model Evaluation Tool (ESMValTool; Eyring et al., 2016), PCMDI Metrics Package (PMP; Lee et al., 2024), and Climate Model Assessment Tool (CMAT; Fasullo, 2020). To my knowledge, CVDP contributes to ESMValTool, and similar capability is also available in the PMP that can help (or has already helped?) cross-validation of the tools. Addressing these (potential) connections would provide readers with a comprehensive understanding of CVDP's role within the broader ecosystem of climate model analysis and evaluation tools.
An earlier version of the CVDP can indeed be run through ESMValTool. The CVDP can also be run from CESM's AMWG Diagnostics Framework (ADF) package and soon from within the CESM Unified Postprocessing and Diagnostics (CUPiD) package. Amongst many other uses, the CVDP is used alongside the CMAT package to evaluate CESM within the model's development cycle.
Figures: Some multi-panel figures appear overly complex and difficult to read due to their small size. Is there a way to simplify these figures while maintaining clarity? For instance:
- In Figure 7, the middle column seems to repeat the same plot (ERA5_1) across multiple rows.
- In Figure 8, the second column (MEM) appears to contain redundant plots.
- In Figure 9, the third column (MEM Change) also seems repetitive.
- Figures 4 and 5 may similarly benefit from reorganization.
The authors could consider reducing the number of plots by eliminating redundancies and restructuring the panel layout to improve readability.
We understand the redundancy but want to show what is in the CVDP and how it plots so we prefer to leave these as they are.
Citation: https://doi.org/10.5194/egusphere-2024-3684-AC4
EC1: 'Comment on egusphere-2024-3684', Penelope Maher, 24 Feb 2025
Dear Nicola Maher and co-authors,
Before I provide further clarification on this manuscript, I would like to address any perceived conflict of interest in me handling this paper as a Topic Editor. I do know the lead author and we do share the same surname. However, we are not related and have not collaborated. As such, I feel I can be impartial in my assessment of this manuscript.
The Chief Editors have identified three areas which caused them concern when overseeing this manuscript. The majority of the confusion has come about due to the wording of the code and data availability section, which I will expand on below. The following points have been raised.
1. Climate model code would need to be provided. This has now been further clarified and this is not required.
2. The embargo on MMLEAv2 data. The confusion has come about due to having the full MMLEAv2 dataset that is not yet hosted (you have embargoed it) and the additional data which has been provided in the zenodo repo (at my request to comply with GMD's policy) in order to reproduce the results on the manuscript. I believe there was confusion about what was in the zenodo repo and what is pending. I would like to suggest that you publish your data and update the data availability to reflect this. Could you also comment on if any permanent data archiving is planned or any way of version control or data DOIs may be managed?
3. The references/sources for the model data are not yet complete. The manuscript needs to provide sufficient information so that someone reading your paper could reproduce your dataset and results. Are there persistent DOIs, data citations or similar you could include? You mention data from two colleagues; could you expand on these runs please? How would someone get the data, is it available, would someone need to contact the author, etc. (anything relevant really)? Please note that the following link does not work: https://esgf-data.dkrz.de/projects/mpi-ge
The two reviewers have provided their comments on the paper. Could you address their reviews and resubmit the paper please?
Regards,
Penny Maher
Citation: https://doi.org/10.5194/egusphere-2024-3684-EC1
AC2: 'Reply on EC1', Nicola Maher, 04 Mar 2025
Response to Editor comments below:
1. Climate model code would need to be provided. This has now been further clarified and this is not required.
Thank you for clarifying this point.
2. The embargo on MMLEAv2 data. The confusion has come about due to having the full MMLEAv2 dataset that is not yet hosted (you have embargoed it) and the additional data which has been provided in the zenodo repo (at my request to comply with GMD's policy) in order to reproduce the results on the manuscript. I believe there was confusion about what was in the zenodo repo and what is pending. I would like to suggest that you publish your data and update the data availability to reflect this. Could you also comment on if any permanent data archiving is planned or any way of version control or data DOIs may be managed?
We apologise for the delay. The data has been submitted for permanent publication (which can include a DOI); we were advised it would be online in early January, but there have been substantial delays. We will ensure it is published and fully accessible before we submit the revision.
3. The references/sources for the model data are not yet complete. The manuscript needs to provide sufficient information so that someone reading your paper could reproduce your dataset and results. Are there persistent DOIs, data citations or similar you could include? You mention data from two colleagues; could you expand on these runs please? How would someone get the data, is it available, would someone need to contact the author, etc. (anything relevant really)? Please note that the following link does not work: https://esgf-data.dkrz.de/projects/mpi-ge
We have provided citations to the model description papers in the table and the links to where we originally obtained the data in the text to make sure the original data can be found. Only one dataset (GFDL-ESM2M) is available only on request via email, although we note a reference paper for this dataset is available. All other model data is now published. We have followed up with the model developers regarding the broken links and are waiting for a response. To this point, in the revision we will add source code links and persistent URLs for the source datasets that the MMLEAv2 is based on, where they are available. We note that DOIs do not exist for all datasets, but we will add them wherever possible. We additionally note that the ETH cmip6-ng dataset is based on data from ESGF and then processed following Brunner et al. 2020 as part of the ETH cmip6-ng effort; we will make clear in the revision that the original data source is CMIP6 ESGF.
Example DOI to be added (for model CESM2):
Danabasoglu, Gokhan (2019). NCAR CESM2 model output prepared for CMIP6 CMIP. Version YYYYMMDD[1]. Earth System Grid Federation. https://doi.org/10.22033/ESGF/CMIP6.2185
Citation: https://doi.org/10.5194/egusphere-2024-3684-AC2