the Creative Commons Attribution 4.0 License.
The Framework for Assessing Changes To Sea-level (FACTS) v1.0-rc: A platform for characterizing parametric and structural uncertainty in future global, relative, and extreme sea-level change
Abstract. Future sea-level rise projections are characterized by both quantifiable uncertainty and unquantifiable, structural uncertainty. Thorough scientific assessment of sea-level rise projections requires analysis of both dimensions of uncertainty. Probabilistic sea-level rise projections evaluate the quantifiable dimension of uncertainty; comparison of alternative probabilistic methods provides an indication of structural uncertainty. Here we describe the Framework for Assessing Changes To Sea-level (FACTS), a modular platform for characterizing alternative probability distributions of global mean, regional, and extreme sea-level rise. We demonstrate its application by generating seven alternative probability distributions under multiple alternative emissions scenarios for both future global mean sea level and future relative and extreme sea level at New York City. These distributions, closely aligned with those presented in the Intergovernmental Panel on Climate Change Sixth Assessment Report, emphasize the role of the Antarctic and Greenland ice sheets as drivers of structural uncertainty in sea-level rise projections.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
CC1: 'Comment on egusphere-2023-14', Benjamin Grandey, 03 Feb 2023
The authors provide a clear description and helpful discussion of a useful tool, FACTS.
I have a few minor comments and questions for the authors' consideration.
L53: The acronym FAIR is used in two senses: (i) "Findable, Accessible, Interoperable, Reusable" (only once, at L53), and (ii) the FAIR climate model. Only the first of these is defined. To avoid potential confusion, could the acronym be reserved for the FAIR climate model?
L165: Should "FAIR 1.0" not be "FACTS 1.0"?
L250-252: When calculating thermosteric sea level rise, the authors state that the tlm/sterodynamics module uses ocean heat content (OHC) from the fair/temperature module, following Fox-Kemper et al. (2021). Does FAIR produce the OHC data, or is the climate simulation step bypassed (L150)? I understand that Fox-Kemper et al. (2021) used OHC from a two-layer energy balance model. Have I misunderstood something?
Table 1: Does every module sample a distribution? Or do some modules produce only a single time series? In particular, I am a little confused by the nature of the output from the land water storage and vertical land motion modules (L271-297).
L345: Should "Table 2" not be "Table 1"?
L375-378: I understand that Bamber et al. (2019) sought to account for dependence between the ice sheet components. Is this dependence preserved in workflow 4? Does this explain the large positive interaction evident in the early decades of workflow 4 (Fig. 4 bottom row)? If so, why does this positive interaction decrease in later decades?
Figures 2, 3, and 5 captions: Should "think" not be "thin"?
Citation: https://doi.org/10.5194/egusphere-2023-14-CC1
AC1: 'Reply on CC1', Robert Kopp, 16 May 2023
- L53: The acronym FAIR is used in two senses: (i) "Findable, Accessible, Interoperable, Reusable" (only once, at L53), and (ii) the FAIR climate model. Only the first of these is defined. To avoid potential confusion, could the acronym be reserved for the FAIR climate model?
Thank you for the suggestion. We will replace the reference to “FAIR science” with “open science”. We have also adjusted the acronym used for the FaIR climate model to have a lowercase ‘a’, which has become the preferred usage.
- L165: Should "FAIR 1.0" not be "FACTS 1.0"?
You are correct. Thank you for correcting the error.
- L250-252: When calculating thermosteric sea level rise, the authors state that the tlm/sterodynamics module uses ocean heat content (OHC) from the fair/temperature module, following Fox-Kemper et al. (2021). Does FAIR produce the OHC data, or is the climate simulation step bypassed (L150)? I understand that Fox-Kemper et al. (2021) used OHC from a two-layer energy balance model. Have I misunderstood something?
The two-layer energy balance model is incorporated into FaIR as an alternative representation of the forcing-temperature coupling, based on Geoffroy et al. (2013) (see https://docs.fairmodel.net/en/v1.6.3/examples.html#geoffroy-temperature-function). We will add a parenthetical comment in the text to this effect.
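As a hedged sketch of what a Geoffroy-style two-layer energy balance model looks like (illustrative only: this is not FaIR's internal implementation, and all coefficient values below are invented round numbers of roughly the right magnitude):

```python
import numpy as np

# Illustrative two-layer energy balance model (NOT FaIR's internals):
#   C   dT/dt   = F - lam*T - eps*gamma*(T - T_d)   (mixed layer)
#   C_d dT_d/dt = gamma*(T - T_d)                   (deep ocean)
# All parameter values below are invented for illustration.
C, C_d = 8.0, 100.0        # heat capacities (W yr m^-2 K^-1)
lam, gamma, eps = 1.3, 0.7, 1.0  # feedback, heat-uptake, efficacy params
dt = 1.0                   # time step (yr)
years = 250
F = np.full(years, 3.7)    # constant forcing, roughly 2xCO2 (W m^-2)

T = np.zeros(years)        # mixed-layer (surface) temperature anomaly (K)
T_d = np.zeros(years)      # deep-ocean temperature anomaly (K)
for t in range(1, years):
    heat_uptake = gamma * (T[t - 1] - T_d[t - 1])
    T[t] = T[t - 1] + dt / C * (F[t - 1] - lam * T[t - 1] - eps * heat_uptake)
    T_d[t] = T_d[t - 1] + dt / C_d * heat_uptake

# An ocean-heat-content proxy (up to unit conversion) follows from the
# integrated heat stored in both layers; a suitably scaled quantity of
# this kind is what an OHC-based thermosteric calculation would consume.
ohc_proxy = C * T + C_d * T_d
```

The surface temperature relaxes quickly toward a fast quasi-equilibrium and then creeps upward on the deep-ocean timescale, while the heat-content proxy grows throughout; this separation of timescales is the point of the two-layer formulation.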
- Table 1: Does every module sample a distribution? Or do some modules produce only a single time series? In particular, I am a little confused by the nature of the output from the land water storage and vertical land motion modules (L271-297).
Yes, every module produces a distribution. For land water storage, we will note: “Uncertainty in the projections is generated by sampling the parameters of the sigmoidal fit for reservoir storage and linear fit for groundwater depletion.” For VLM: “Uncertainty in the projections is generated based on the uncertainty in the estimate of the constant trend.”
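As a hedged illustration of the sampling described in this reply (not the FACTS module itself: the logistic functional form, baseline years, and all parameter values below are invented for the example), propagating fit-parameter uncertainty into land water storage trajectories might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2020, 2101)

# Hypothetical central estimates and standard errors (all invented):
# sigmoidal reservoir storage, here a logistic: a / (1 + exp(-b*(t - t0)))
a_mu, a_sd = -0.03, 0.005   # asymptote (m sea-level equivalent)
b_mu, b_sd = 0.05, 0.01     # steepness (1/yr)
t0 = 1990.0                 # inflection year (held fixed here)
# linear groundwater depletion: m * (t - 2020)
m_mu, m_sd = 3e-4, 1e-4     # trend (m/yr)

n = 1000
# Sample the fit parameters, not the output directly
a = rng.normal(a_mu, a_sd, n)[:, None]
b = rng.normal(b_mu, b_sd, n)[:, None]
m = rng.normal(m_mu, m_sd, n)[:, None]

reservoir = a / (1.0 + np.exp(-b * (years - t0)))
groundwater = m * (years - 2020)
lws = reservoir + groundwater  # (n, n_years) ensemble of trajectories
```

Each row of `lws` is one internally consistent trajectory, so the spread across rows at any year is the projection uncertainty induced by the parameter uncertainty of the two fits.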
- L345: Should "Table 2" not be "Table 1"?
Yes, thank you.
- L375-378: I understand that Bamber et al. (2019) sought to account for dependence between the ice sheet components. Is this dependence preserved in workflow 4? Does this explain the large positive interaction evident in the early decades of workflow 4 (Fig. 4 bottom row)? If so, why does this positive interaction decrease in later decades?
The samples used in workflow 4 for the ice sheets are taken directly from those reported by Bamber et al. (2019), preserving all correlations therein. While a deep dive into the correlations is beyond the scope of this manuscript, the workflow is by construction consistent with the Monte Carlo samples of ice sheets generated in Bamber et al. (2019), because the samples are used directly as a static input. Bear in mind that (1) the absolute variance early in the 21st century is quite small, and (2) correlations in Bamber et al. (2019) were not elicited early in the 21st century, so one should not read too much into the high positive correlation in the early decades – the import of this correlation is small, given the small overall variance.
- Figures 2, 3, and 5 captions: Should "think" not be "thin"?
Yes, thank you.
Citation: https://doi.org/10.5194/egusphere-2023-14-AC1
CC2: 'Clarification about the method to project ODSL', Dewi Le Bars, 06 Feb 2023
Dear authors,
Thank you for this effort to clearly describe this framework and to make the code openly available. This work is of great value to the community.
I do not understand the description of the method for ODSL in l.255-258:
“The resulting global mean thermosteric sea-level rise is then combined with ocean dynamic sea-level change and the inverse barometer effect using the gridded output of CMIP6 models (see the right column of Table A1 for the models that were used in (Fox-Kemper et al., 2021)), based on the time-varying correlation structure between global mean thermal expansion and ocean dynamic sea-level change in the multi-model ensemble.”
I don’t think that a correlation structure is enough information to reconstruct ODSL from global mean thermal expansion. Is this method similar to Palmer et al. 2020? (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019EF001413) There, it is described as a linear regression, but instead of being between global mean thermal expansion and ODSL it is between global mean thermal expansion and sterodynamic sea-level change:
“Following previous studies (Bilbao et al., 2015; Palmer, Howard, et al., 2018; Perrette et al., 2013), the effects of local changes in ocean density and circulation are included by establishing regression relationships between global thermal expansion and local sterodynamic sea-level change in CMIP5 climate model simulations”
Your clarifications on this point would be very much appreciated. In some places the assumption of a linear regression between global mean thermal expansion and ocean dynamic sea-level change is not very accurate, therefore making this assumption clearer would help people decide if this framework works for them or not.
Thank you,
Dewi Le Bars
Citation: https://doi.org/10.5194/egusphere-2023-14-CC2
AC2: 'Reply on CC2', Robert Kopp, 16 May 2023
Dear Dewi,
Thank you for the question. We will add an appendix to the following effect:
The method used is a modification of that described in Kopp et al. (2014). Global mean thermal expansion projections are generated from the two-layer model. Ocean dynamic sea level is assumed to have a degree of correlation with global mean thermal expansion, with the correlation assessed on a grid-cell basis from the CMIP6 ensemble for a particular SSP scenario. Given a sample of 19-year-average global mean thermal expansion y at a particular point in time, 19-year-average ocean dynamic sea level z is taken as distributed following a t-distribution with a conditional mean of

z'(r) + σ(r) k(r) (y − y')/s

and a conditional standard deviation proportional to

σ(r) √(1 − k(r)²),

where z'(r) is the multimodel mean ocean dynamic sea level at location r, σ(r) is the multimodel standard deviation, k(r) is the correlation between global mean thermal expansion and ocean dynamic sea level z(r), y' is the multimodel mean global mean thermal expansion, and s is the multimodel standard deviation of global mean thermal expansion. The standard deviation is inflated relative to that of the ensemble to account for the expert judgment that the 5-95th percentile range of the ensemble may have as much as a 33% chance of being exceeded on either end (i.e., the 5-95th percentile range is treated as a likely range). Though the parameters of this regression model are re-fit for each time point, correlation across time is preserved (perhaps excessively) in sampling by drawing (via Latin hypercube sampling) a single quantile of the variance characterized by the conditional standard deviation to use at all time points for a given time series sample. In sampling the t-distribution, the number of degrees of freedom is taken as the number of GCMs providing DSL projections for a particular grid cell in the scenario used for calibration.
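As an illustrative sketch only (not the FACTS implementation: all calibration values below are invented, the ensemble-variance inflation is omitted, and a fresh t-draw is made here rather than the single per-time-series quantile described above), the conditional sampling for one grid cell at one time point might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented calibration values for one grid cell r:
z_bar = 0.05   # multimodel mean ocean dynamic sea level at r (m)
sigma = 0.03   # multimodel standard deviation of ODSL at r (m)
k = 0.6        # correlation of thermal expansion with ODSL at r
y_bar = 0.12   # multimodel mean global mean thermal expansion (m)
s = 0.04       # multimodel std. dev. of thermal expansion (m)
dof = 15       # deg. of freedom = number of GCMs with DSL output at r

# Samples of global mean thermal expansion y (stand-in for the
# two-layer-model output)
y_samples = y_bar + s * rng.standard_normal(2000)

# Conditional mean and standard deviation of ODSL given y
cond_mean = z_bar + sigma * k * (y_samples - y_bar) / s
cond_sd = sigma * np.sqrt(1.0 - k**2)

# Draw from the conditional t-distribution. (The module would instead
# reuse one Latin-hypercube quantile across all time points of a
# given time series sample; here we draw independently for one time.)
t_draws = rng.standard_t(dof, size=y_samples.size)
z_samples = cond_mean + cond_sd * t_draws
```

Where thermal expansion and ocean dynamic sea level are uncorrelated (k = 0), the conditional mean collapses to the multimodel mean and the conditional standard deviation to the (scaled) multimodel standard deviation, matching the limiting case noted below.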
In some ways, the approach is similar to that of a linear-regression based scaling of ocean dynamic sea level on global mean thermal expansion, as in Palmer et al. (2020). The commonality is the assumption that the distribution of ocean dynamic sea level at a given point may be constrained by information about global mean thermal expansion. (“May” is an operative word here — it is also possible for the scaling factor or correlation coefficient to be zero).
One important difference is that this approach is recalibrated for each time step, whereas the Palmer et al. approach finds a single regression coefficient for a given GCM across time. A second is that the uncertainty not captured by the characterized correlation is sampled, whereas in Palmer et al., all variance is assumed to be captured by the spread of regression coefficients across GCMs. The approach used here is more focused on the distributional characteristics across GCMs, as opposed to representing each individual GCM by a regression coefficient. As a consequence of these differences, the Kopp et al. (2014) approach loses a degree of traceability to individual GCMs, being instead focused on preserving the distributional properties assessed based on the ensemble.
Note that where thermal expansion and ocean dynamic sea level are uncorrelated, this approach returns simply the multimodel mean and scaled standard deviation for the scenario.
Best,
The authors
Citation: https://doi.org/10.5194/egusphere-2023-14-AC2
CC3: 'Comment on egusphere-2023-14', Vanessa Völz, 08 Feb 2023
Very helpful description of FACTS!
Chapter 3, L334: Aren't there 20,000 instead of 2,000 Monte Carlo samples?
Citation: https://doi.org/10.5194/egusphere-2023-14-CC3
AC3: 'Reply on CC3', Robert Kopp, 16 May 2023
Thank you! This manuscript describes FACTS 1.0 and demonstrates it using a set of modules that were developed, in part, to support the AR6 assessment. It does not document the AR6 sea level projections, for which 20,000 Monte Carlo samples were run; the demonstration here uses 2,000 Monte Carlo samples.
Citation: https://doi.org/10.5194/egusphere-2023-14-AC3
RC1: 'Comment on egusphere-2023-14', Anonymous Referee #1, 25 Apr 2023
The authors present the FACTS framework to probabilistically estimate future sea level rise, globally and regionally. The framework aims to make it possible to seamlessly exchange individual drivers of global sea level rise so structural uncertainty can be explored. The presented work stands out as it underpins several authoritative sea level assessments, of which the most prominent is the IPCC AR6 WG1 assessment.
Due to this special position of FACTS, usability and replicability are key concerns. I therefore split this review into two parts: part one is on the scientific aspects and clarity. My comments here are mainly on clarification and better explanation, because the method is a continuation of established works and because the AR6 methodology is fixed; I see a major function of the manuscript as documenting that methodology. Part two is on usability and replicability: readers should be able to replicate AR6 sea level numbers with the manuscript and the code at hand without being experts in specific high-performance computing environments. I tried but failed. (I followed the rather succinct “Quick Start” documentation.) I propose improvements to be made to the manuscript and to the code to overcome this. Only if I (as an example user) succeed in a "replication of the AR6 approach entirely within FACTS" (stated in lines 72-73) can the work reach its full potential and follow its aspiration to become a "larger-scale community project" (line 505).
Part 1:
I have four points that need more clarity in my view.
1) It is not straightforward to understand how the IPCC AR6 numbers are derived from FACTS. It is described in L411ff, which is already part of the discussion section. The manuscript would profit from stating upfront how the AR6 numbers are constructed within the manuscript, for example as part of sec 3.3 or as a separate section. I also advocate for stating the AR6 numbers directly within the manuscript (i.e. within tables 3 and 4), which would make comparison easier. For now I find close correspondence, but no replication of IPCC AR6 numbers (from WG1 Table 9.9). For replication I would expect the numbers to match. If not, I would at least expect a paragraph where the numbers are related and differences justified. Ideally the setup for AR6 replication (a "cookbook") would be prepared within the codebase so that the user does not have to manually infer the settings from the manuscript.
2) VLM is now recognized as a key driver of relative sea level rise and thus impacts (i.e. Nicholls et al, 2021), but it does not get the necessary attention in the manuscript.
a) VLM estimation is based on Kopp et al. 2014, which uses a Gaussian process model to fit historical tide gauge data. The step from fitting to tide gauge data (yielding spatial fields of relative sea level as output) to estimating the VLM component is not clear to me even after reading the SI of Kopp et al. 2014. This needs additional explanation.
b) To my knowledge the approach does not involve direct observations of VLM (i.e. GNSS), where much progress has been seen for VLM estimation. For example, involving such measurements to correct tide gauge measurements for VLM crucially helped to close the sea level budget (Frederikse et al 2020). Can you justify the default choice of Kopp 2014?
c) Changes in contemporary ice mass loading affect not only the ocean water distribution (which I see included through the GRD fingerprints), but also VLM. GRD fingerprints are mentioned for ice sheets (l222), but the reference is more than 20 years old and it is not clear if newer works, especially those of Thomas Frederikse (2017, 2019, 2020), are represented, and how and if they affect the VLM estimates of the presented work.
d) VLM is independent of future warming, which could be said more clearly (currently referred to as "constant trend", l285) and also stated as a caveat: future ice mass loss is scenario dependent and will influence the VLM rate, but this is not implemented in FACTS.
3) It is not clear to which baseline the individual contributions are referenced. Though the authors mention Gregory et al. 2019, which did an excellent job of clarifying terminology, the reference frame is not stated explicitly for each component. The manuscript would profit from such explicit statements. Is the FACTS regional relative sea level rise N15 in Gregory et al. 2019? Are the components in the geocentric reference frame? Clarification on the reference frame will help scientists to add new modules.
4) FACTS only works for the standard RCP/SSP scenarios except for workflow 1e if I understand it correctly. This is different to sea level emulators and I understood it only late in the text. This should be made more prominent so readers can better contextualise this work.
Detailed comments (part 1):
l15-l18: can we include a non-US reference as well?
l29: introduce relative sea level rise and its definition (e.g Gregory et al. 2019)
l34: is relative sea level changes here the right term? So did Mitrovica already look into VLM influenced by West Antarctic ice loss or is it only about water mass redistribution?
l38: this paragraph does not mention vertical land motion, a key component of local relative sea level rise.
l47: “a single probability distribution”
l111: a word missing after MPI/OpenMP?
l109-l142: Why this detailed description of RADICAL-Cybertools? It distracts from the story and does not help to get the code running. I would revise and shorten this and describe the environment in terminology understandable to sea level and climate scientists. See also part 2 of the review.
l145/Figure1: what do the abbreviations like WF1e in the integration and extreme sea level step mean? They are not explained here.
l167: “bring this formerly offline simulation within FACTS” is not good to understand. Please reformulate for clarity.
l170: “demonstrate the ability”
l185: “an additional basal ice shelf melting” can this be more concrete with a number?
l187: convoluted→convolved?
l187-192: I would reorder the sentences so the order represents the causal chain from global mean temperature change to ice loss. As of now a bit hard to follow.
l198ff: it is not clear to me from this paragraph if the authors implement a method already present in the AR5 or if they create a method in FACTS to capture the numbers of 2005-2010 observed and 2100 projected ice loss of the AR5.
l204: “a negative rate is added”: can you say this more precisely?
l219: appled→ applied
l219 “were applied in the context of the corresponding” is not clear. Do you mean “the RCP scenario projections were treated as SSP scenario projections”?
l220: fingerprints precomputed, do they include the ocean bottom deformation part?
l221: “includes”→“include”
l221ff: do the GRD fingerprints only influence the geocentric part of relative sea level rise or also VLM?
l227: “of 2015-2100 glacier loss” or similar;
L227: which RCP scenarios are used?
l233: fI(t)^p: using f for the parameter here is confusing, it reads like a function; maybe write "f x I(t)" or choose another parameter name.
l235: “a set of glacier models”: can you be more specific?
l233ff: if readers do not cross this paragraph, they do not understand that the method used in the AR6 is named ipccar5, but uses an updated calibration.
l246ff: are these GRD fingerprints?
l251: what does “tlm/” abbreviate?
l259: where is the dedrifting and regridding documented to reproduce the work?
l264: “is then projected”
l266: “projects global mean thermosteric sea-level rise, taking as input ... global mean thermosteric sea-level rise …” is confusing to read. I suggest to revise this sentence.
l283ff: learning here that all earlier described components do not include VLM, so they do not output relative rise. It would be good to make this explicit before. It would be also good to say on which reference system all the other components work.
L285: "constant trend": this means that future VLM is independent of future warming. Good to say this more explicitly.
l307: “Below the support” is hard to understand. Rephrase.
l314: “, with the substitution …” this part of the sentence is hard to follow. Rephrase.
l323/Table1: module names ipccar5/ and ipccar6/ suggest that these are the ones used in the respective IPCC reports and the others not, but this is not the case following the text. This should be made clear in the caption.
l327: can we give a more precise ref than just AR6? Is it the unshaded cells in Table 9.9? A full stop is missing after the reference.
l329/Table2: can we mark here which are the workflows for IPCC AR6 projections?
l331: is it the shaded last row of Table 9.9 AR6 WG1? Please reference.
l344/45: this means this is not a full emulator, as FACTS cannot map global mean temperature to sea level rise. Depending on the modules, it is restricted to RCP/SSP scenarios.
L346/Table3: the reader is left alone how these numbers compare to AR6. It would help to present the AR6 numbers again in Table 3/4 and discuss deviations in a paragraph.
l357: cm→m
l358: what are “Workflow pairs”?
L372ff: please add a reference or an explanation of how the projection variance and interaction terms are calculated.
l405: explain TE
L410: indeed I would see it as a major aim of the manuscript to replicate the main AR6 SLR projections.
L417: why the difference in how likely ranges are defined in this study compared to the rest of the IPCC AR6? Can the motivation be stated?
L450: with some caveats: can you detail how this was translated?
L474: "... for glaciers". I am not sure if this is generally true. Also glaciers have different timings of mass loss and disappearance in different world areas.

Part 2:
The authors state in l71 that "FACTS 1.0 allows replication of the AR6 approach entirely within FACTS", but I did not manage to make the code work on our computers. A main hurdle is the EnTK framework, which seems to be a specific framework only installed on certain supercomputers. Making this the default option to run FACTS hinders most scientists from replicating the work. I would therefore advise changing the default option for running FACTS to something generic many scientists are accustomed to. The authors provide a blueprint for such an option via a shell script. I recommend making this the default option, or providing an alternative approach that can be used to reproduce AR6 numbers. In any case, the code should be cleared of hardcoded paths (e.g. using one configuration file shared across modules, or one per module with a consistent format across modules), and the authors should better describe how R should be installed to make the land ice emulators work. Also provide a description of how to reproduce the AR6 numbers within the code/README. Ideally FACTS would in addition be provided as a package and could be installed using the usual tools (pip install or similar).

Concerning the manuscript, a clear reference to the AR6 numbers FACTS aims to replicate is missing. I expect these are in Table 9.9 of AR6 WG1. One solution would be to add them to Tables 3 and 4 of the manuscript for direct comparison. I also recommend providing computational cost per module (CPUh or similar) so potential users of the framework can judge if installation of the EnTK framework is necessary.
Detailed comments (part 2):
The code base includes a large number of dependencies, including heavy dependence on R. The authors provide some guidance in how to install the dependency, but it appears set up for their specific system, and not particularly user-friendly for the larger scientific community. In particular, library location and work directory are hard-coded in files disseminated throughout the project (in individual modules), as opposed to clearly indicated in a centralised configuration file.
For instance, to run the config provided in the doc:
cp -r experiments/coupling.ssp585/config.yml test
python3 runFACTS.py test

This first fails because of missing files in modules/emulandice/shared (emulandice_1.1.0.tar.gz and emulandice_bundled_dependencies.tgz). To produce them, it was necessary to set up a local R environment. This required, among other things, editing:
modules/emulandice/shared/emulandice_environment.sh : the line “module use /projects/community/modulefiles” had to be commented out.
modules/emulandice/shared/emulandice_bundle_dependencies.R:
packrat::set_opts(local.repos = c("/projects/community/R3.6_lib_workshop","."))
packrat::install_local('cli')
…

needed to be replaced with the more traditional:
install.packages('cli')
The authors did provide a README file in that directory with the mention:
“You will likely need to customize emulandice_environment.sh and emulandice_bundle_dependencies.R based on your local environment.”
But we recommend the default be set up for a generic Linux system, and “customization” reserved for use on the authors' HPC, instead of (currently) the opposite.

The EnTK framework is the largest hurdle for use in the wider scientific community. The welcomed, alternative --shellscript option is experimental. We identify it as the main area to improve in order to disseminate the work.
runFACTS.py (issue with the EnTK framework)
The Mongo DB Server installation was smooth following the instructions provided by the authors. However, we quickly ran into issues with their EnTK framework when following the documentation:
python3 runFACTS.py experiments/dummy
“radical.entk.exceptions.EnTKError: Shell on target host failed: Cannot use new prompt,parsing failed”
The authors offer an alternative (https://fact-sealevel.readthedocs.io/en/latest/quickstart.html#testing-a-module-with-a-shell-script) where the code produces a shell script to bypass the EnTK framework, but with a strong disclaimer ("Performance is not guaranteed, and multi-module experiments are very likely not to work without customization.").
I tested the dummy setup and had to make minor modifications to runFACTS.py:
- print(' WORKDIR=/scratch/`whoami`/test.`date +%s`')
+ print(' WORKDIR=local_scratch/`whoami`/test.`date +%s`')
- print(' OUTPUTDIR=/scratch/`whoami`/test.`date +%s`/output')
+ print(' OUTPUTDIR=local_scratch/`whoami`/test.`date +%s`/output')

and create the local_scratch folder.
Then:
python3 runFACTS.py --shellscript experiments/dummy > test_dummy.sh
source test_dummy.sh

This ran without error, but also produced no output.
I then tried the other configuration file indicated in the documentation, again with the --shellscript option:
mkdir test
cp -r experiments/coupling.ssp585/config.yml test
python3 runFACTS.py --shellscript > test_coupling.sh
source test_coupling.sh

I ran into issues related to library installation and hard-coded paths as described in the previous section. Once overcome, the script ran but new error messages appeared:
> cp: cannot create regular file 'local_scratch/reviewer/test.1681896393/output': No such file or directory
> cp: target 'local_scratch/reviewer/test.1681896393/test.GrIS1f.FittedISMIP.GrIS' is not a directory

(local_scratch is a local folder I created to replace the hard-coded, author-specific architecture /scratch)
Given the disclaimer provided by the authors on the experimental nature of the --shellscript option, I did not attempt to run this further.
References

Frederikse, T., Riva, R. E., & King, M. A. (2017). Ocean bottom deformation due to present-day mass redistribution and its impact on sea level observations. Geophysical Research Letters, 44(24), 12-306.
Frederikse, T., Landerer, F. W., & Caron, L. (2019). The imprints of contemporary mass redistribution on local sea level and vertical land motion observations. Solid Earth, 10(6), 1971-1987.
Frederikse, T., Landerer, F., Caron, L., Adhikari, S., Parkes, D., Humphrey, V. W., ... & Wu, Y. H. (2020). The causes of sea-level rise since 1900. Nature, 584(7821), 393-397.
Nicholls, R. J., Lincke, D., Hinkel, J., Brown, S., Vafeidis, A. T., Meyssignac, B., ... & Fang, J. (2021). A global analysis of subsidence, relative sea-level change and coastal flood exposure. Nature Climate Change, 11(4), 338-342.
Citation: https://doi.org/10.5194/egusphere-2023-14-RC1
AC4: 'Reply on RC1', Robert Kopp, 16 May 2023
We thank the reviewer for their comments. Please see detailed response attached.
Regarding the reviewer's challenges installing FACTS, we were not able to replicate all of them. However, we have made a few changes intended to simplify the installation of FACTS 1.0.0. Most notably, we have adopted more generally applicable (though slower) defaults for the R setup in the emulandice module, added details of the set up of this module to the Quick Start instructions, and created a script (vm_factsenvsetup.sh) that is successfully able to set up FACTS within a vanilla Ubuntu Focal virtual machine.
RC2: 'Comment on egusphere-2023-14', Luke Jackson, 15 Jun 2023
General Comments
This paper outlines a modular platform designed to harmonise and internally calculate (tidal-datum-epoch) mean sea-level contributions from all major global and local sea-level components, driven by a climate model emulator, with post-processing to localise the projections and apply them to extreme water levels. The modular structure enables different combinations of sea-level component emulators/datasets within a fully probabilistic framework, thus accounting for aleatory and epistemic uncertainty. The GSL and New York examples demonstrate the utility of the framework effectively. Scientifically, the framework benefits from more than a decade of research that exploits the budgetary approach to probabilistic sea-level change developed by numerous researchers. The framework also shows great potential and flexibility – I hope this will become an evolving resource that the community can utilise in future.
Overall, this is a welcome piece of work to the research community. It is carefully written and, in most places, clear to follow. There are a number of places where additional detail is required, ideally an additional case study would be shown, and a few Figures/Tables need updating.
Specific Comments
The Introduction has a strong IPCC focus. While this provides context, additional references to key work is important – particularly the development of a budgetary approach to SL, which is essential to the process-based method of projection.
The choice of Workflows presented focuses specifically on combinations of AIS/GrIS emulations/outputs. The results certainly clarify the IPCC decision-making process (medium/low confidence) but showing a more diverse selection of modules would showcase the FACTs framework more effectively as an assessment tool (e.g., VLM). Adding a separate city-level case study where VLM is focused upon (in addition to the current NYC example showcasing ice sheet combinations) would be valuable.
The issue of a common reference timescale is only partially addressed. This is pertinent given your mention of IPCC AR5 (ref timescale 1986-2005) and some sea-level components (e.g., GrIS SMB Fettweis et al. 2013) that are relative to an alternative baseline (e.g., ~1970s). Highlighting this earlier and how this is dealt with (either as a post-processing step or module specific step) is very important. Likewise, you need to explain how/if this timescale can be user defined.
Vertical Land Motion needs additional detail, and the issue of double counting needs to be addressed in the framework. If the barystatic fingerprints to be scaled use RSL, then they will contain a VLM component (as does the GIA RSL fingerprint). Scaling and summation of SL components within the third “Step” of the framework would then induce possible double counting if the VLM component has not been corrected for this (arguably small) component. This would be true of K14 VLM or NZ VLM, as you allude to in the discussion.
Technical Corrections
Line 5: rephrase “a modular … sea-level rise”, needs to allude to individual SL components to generate global, regional, ESL projections.
Line 35-37: Highlight earlier work of Slangen et al. (2012) that led into AR5.
Line 43: “They also … core elements” – rephrase it is not clear what you mean here.
Line 44: “these studies” – which studies? No reference to specifics or examples.
Line 44: K14 and K17 do not refer specifically to “ProjectSL/LocalizeSL Framework” despite these being associated with the coded release. Rephrase to be more general and highlight additional frameworks beyond K14/K17.
Line 47: “This assumption …” rephrase for grammar, and an example (ideally non-SL to aid the reader) would be useful.
Line 49: remove “in”, and add “different components” to “different sea-level components”
Line 69: Slangen et al. in press – now published – update
Line 82: Terminology “Steps” versus “step” – is there a difference between these?
Line 95: “total” – you can only refer to total if all SL components at an appropriate scale are accounted for.
Line 103: “combined” – the notion of combining workflows needs articulating – this is not explained in the examples either – do you mean something like Sweet et al. (2022)?
Line 113: “RADICAL-SAGA” is not discussed in the manuscript – what is its purpose?
Line 124: “exposes” – what do you mean by this?
Line 124/5: “Those constructs … tasks” is effectively repeated on line 128-129 – consider merging for clarity
Line 125: Terminology “Ensemble” appears here then isn’t mentioned elsewhere.
Line 131: “failures of Tasks …” – what about failed Stages or Pipelines – how does the framework deal with these higher-level issues?
Line 137-140: RADICAL Tools, their roles, Pipeline, Stage and Task need to be included within the schematic diagram (Figure 1) to better communicate section 2.2
Figure 1: see comment above. Including the 7 workflows discussed is of limited value if this is a conceptual diagram – a number of “example” workflows showcasing Task, stage and Pipelines would serve better.
Line 148: “emissions scenario as input” – this is the only input/variability that is feasible within this module? For example, how many ocean layers are used (is it a default FAIR setup?)?
Section 2.3.2 would benefit from each sea-level component being within a separate subsection (e.g., Section 2.3.2.1 Generic) rather than an italicised header that is inline with the main text.
Line 157: facts/directsample : does not appear in Figure 1
Line 165: FAIR 1.0 – should this be FACTS1.0 ?
Line 168/169: “The emulandice …” new paragraph
Line 170: “simulations and demonstrates” to “simulations. These demonstrate”
Line 184: does Levermann et al. 2020 refer to the original code or the modification to speed it up? If the former then move the citation prior to the comma in Line 183.
Line 196: “fixed rate of mass loss” – does this rate refer to basal melting or SMB or both?
Line 196/197: “This assumption, … underestimate …” Does it? By how much?
Line 214: “through” to “to”
Line 215/216: “In IPCC AR6 … projections” What do you mean by “applied in the context of” – this is ambiguous.
Line 219: “were appled” (spelling) and rephrase – sentence reads oddly – perhaps alternative wording to “applied”
Line 221: “includes” to “include”
Line 221: “regional scaling” to “regionalisation of their global SL equivalent contribution” (or similar)
Line 224: needs more information on the fingerprints (e.g., all barystatic components – grouped or separate or both such as AIS or WAIS, APIS, EAIS or by-sector, realistic (observed) or uniform, range of earth properties or not?)
Line 239/240: Remove “(In this manuscript, …)”
Line 257: “(Fox-Kemper et al., 2021))” to “Fox-Kemper et al., (2021)”
Line 276: “The model-based …” – not clear if this relates to module or K14 method
Line 288/289: “Sensitivity tests …” – are these published somewhere? If not, then inclusion within an Appendix would be very useful.
Line 290: “The spatiotemporal model” – new paragraph but I assume you are talking about VLM? Needs restating.
Line 293: “tuned” – what do you mean by this? Is this a scaling procedure/process to minimise misfit to observations?
Line 295-297: much more detail on NZInsar module needed as vertical reference frame of InSAR/GPS differs from RSL benchmarks regionally. The way rates etc from this module are harmonised is important to explain. Likewise, direct VLM observation will contain components of VLM from each barystatic SL component and GIA – how are these accounted for (prior or post Step 3)?
Line 306: “Annual means … stage” place this before fitting sentence to order processing steps correctly.
Line 308: “Below the … (Buchanan et al., 2016).” I didn’t understand this sentence – sorry – rephrase
Line 311: Need a statement here that the dynamic evolution of ESL (due to changing atmosphere-ocean climate variability) is not accounted for in the current framework.
Line 313: Remind the reader of Workflows here.
Line 333: “pseudo-random” – what do you mean by this?
Line 338: temperature should include a ° symbol
Line 358: “cm” to “m”
Line 370: stay consistent with units – switch to m
Table 3: add GSAT into title; rewrite footer for consistency so that T and GMSL are clearly identified with different baselines rather than using “except”; use an asterisk to refer to specific modules that use the complementary RCP rather than the SSP, and refer to this in the footer so the reader doesn’t have to go hunting elsewhere in the text.
Figure 3: Present time to 2100 rather than 2150 so that it is consistent with other Figures. Since you are not showing the uncertainty change shape after 2100 the thick/thin bars could mislead the reader; caption spelling mistake “think” to “thin”.
Line 457-464: Missing detail on GIA (as distinct from VLM), such as GIA uncertainty (e.g., Melini & Spada, 2019) and present-day GRD uncertainty (e.g., Cederberg et al. 2023).
Line 462: “scenarios” – should mention non-linear VLM behaviour too.
Line 478-484: See Specific Comment on common timescales – emphasis on compatibility of component-based timescale including use of observation/projection-data to supplement historical runs to bridge the gap between modelling intercomparison projects remains important.
Citation: https://doi.org/10.5194/egusphere-2023-14-RC2 -
AC5: 'Reply on RC2', Robert Kopp, 26 Jun 2023
We thank Dr. Jackson for his detailed and helpful comments. We respond to his specific comments here, and will address his technical comments in the revision of the manuscript.
- The Introduction has a strong IPCC focus. While this provides context, additional references to key work is important – particularly the development of a budgetary approach to SL, which is essential to the process-based method of projection.
Per the reviewer’s suggestion under Technical Corrections, we have added a reference to Slangen et al. (2012) in the narrative leading up to AR5. We have also clarified that the reference to Fox-Kemper et al. (2021) describing the “numerous subsequent studies” is to the overview in section 9.6.3.1. We have also added: “Examples of open-source probabilistic sea-level projection frameworks include the ProjectSL/LocalizeSL framework (Kopp and Rasmussen, 2021) developed by Kopp et al. (2014, 2017), and BRICK (Wong et al., 2017). Additional studies present probabilistic RSL projection methodologies without associated open-source software releases (e.g., Slangen et al., 2014; Grinsted et al., 2015; Jackson and Jevrejeva, 2016; Le Cozannet et al., 2019; Palmer et al., 2020).”
- The choice of Workflows presented focuses specifically on combinations of AIS/GrIS emulations/outputs. The results certainly clarify the IPCC decision-making process (medium/low confidence) but showing a more diverse selection of modules would showcase the FACTs framework more effectively as an assessment tool (e.g., VLM). Adding a separate city-level case study where VLM is focused upon (in addition to the current NYC example showcasing ice sheet combinations) would be valuable.
We appreciate the reviewer’s suggestion regarding the inclusion of a VLM-focused case study, but the emphasis in this manuscript is on the distinctive approach FACTS takes to allow exploration of structural uncertainty. It provides an accurate representation of the current state of the code, which was strongly influenced by the needs of AR6. It is possible for users building on FACTS to develop additional modules, but at the moment we have only one VLM module (kopp2014/verticallandmotion) with global coverage.
We have clarified that the alternative VLM module (NZInsarGPS/verticallandmotion) simply directly samples gridded land motion data from an external file. We note that, in Naish et al. (2022), this module applies a gridded data file describing rates of land motion inferred from interferometric synthetic aperture radar (InSAR) data. We do not add an additional case study, but some of the challenges in thinking about appropriate VLM projections to incorporate into RSL projections are described in Naish et al. (2022, https://doi.org/10.1002/essoar.10511878.1).
- The issue of a common reference timescale is only partially addressed. This is pertinent given your mention of IPCC AR5 (ref timescale 1986-2005) and some sea-level components (e.g., GrIS SMB Fettweis et al. 2013) that are relative to an alternative baseline (e.g., ~1970s). Highlighting this earlier and how this is dealt with (either as a post-processing step or module specific step) is very important. Likewise, you need to explain how/if this timescale can be user defined.
This is a convention that must be consistent across modules. It is standard for modules to have ‘baseyear’ as a parameter (for AR6 examples, this is set to 2005, the midpoint of the 1995-2014 reference period).
We now note: “Configuration options such as the number of samples to run, the time points at which calculations are reported, and the reference period used for output can be globally specified but are implemented on a module-by-module basis.” In the discussion of directions for improvement, we state “The existing FACTS modules start projections in the 21st century.” This is less specific than previously, as – while 2005 (1995-2014) is used as the center point for the results presented here – most of the modules can handle a centerpoint at any time after the year 2000 without problems. The point of the discussion is the absence of historical data, not the way in which reference periods are handled.
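For illustration, re-referencing module output samples to a user-specified base year could be sketched as below (the function name, array layout, and numbers are hypothetical, not the actual FACTS implementation):

```python
import numpy as np

def recenter(samples: np.ndarray, years: np.ndarray, baseyear: int) -> np.ndarray:
    """Re-reference projection samples (n_samples x n_years) so that each
    sample is zero at `baseyear`, interpolating between reported time
    points if needed."""
    base = np.array([np.interp(baseyear, years, s) for s in samples])
    return samples - base[:, None]

years = np.array([2020, 2050, 2100])
samples = np.array([[0.05, 0.20, 0.60],
                    [0.04, 0.15, 0.45]])   # m, relative to some earlier baseline
recentered = recenter(samples, years, 2020)
# Every sample is now zero in the 2020 column; later values shift accordingly.
```

Because the shift is applied sample by sample, the spread of the distribution collapses to zero at the base year, which is the intended behaviour of centring output on a reference period.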
- Vertical Land Motion needs additional detail, and the issue of double counting needs to be addressed in the framework. If the barystatic fingerprints to be scaled use RSL, then they will contain a VLM component (as does the GIA RSL fingerprint). Scaling and summation of SL components within the third “Step” of the framework, would then induce possible double counting if the VLM component has not been corrected for this (arguable small) component. This would be true of K14 VLM or NZ VLM, as you allude to in the discussion.
This is addressed in the response to reviewer 1. We have clarified that we use the term ‘vertical land motion’ in the previous draft to refer to long-term vertical land motion, of the sort reflected in century-scale analysis of tide-gauge data. Elastic deformation associated with contemporary land-ice and land-water mass redistribution is accounted for in the RSL projections via the static GRD fingerprints. For the K14 module, double-counting is an issue only to the extent that there are substantial 20th century trends in deformation associated with century-scale barystatic effects. Given the magnitude of 20th century average barystatic trends (order 1.0 mm/yr), this effect will introduce a bias in most places of <±0.2 mm/yr, or 2 cm/century. This is quite small relative to other sources of uncertainty, and comparable to sampling issues.
Double counting with respect to GIA is not an issue for the K14 method; this is taken into account in the methodology.
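The bound quoted above follows from back-of-envelope arithmetic; as a sketch (the 20% fingerprint deviation is an assumed illustrative figure consistent with the ±0.2 mm/yr bound, not a value stated in the manuscript):

```python
# Order of magnitude of the 20th-century barystatic GMSL trend (mm/yr).
barystatic_trend = 1.0
# Assumed typical magnitude of a fingerprint's deviation from the global mean,
# as a fraction; chosen to be consistent with the +/-0.2 mm/yr bound above.
fingerprint_deviation = 0.2

bias = barystatic_trend * fingerprint_deviation   # mm/yr
bias_per_century = bias * 100.0 / 10.0            # 100 yr, 10 mm per cm -> cm
# ~0.2 mm/yr, i.e. ~2 cm over a century: small next to other uncertainties.
```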
Citation: https://doi.org/10.5194/egusphere-2023-14-AC5
-
EC1: 'Invitation to proceed to revisions', Andrew Wickert, 15 Jun 2023
Dear Dr. Kopp and co-authors,
Based on the comments received, including the constructive criticism of both referees, I encourage you to proceed to drafting a response to the reviewer comments and a revised manuscript.
With good wishes,
Andy
Citation: https://doi.org/10.5194/egusphere-2023-14-EC1
-
CC1: 'Comment on egusphere-2023-14', Benjamin Grandey, 03 Feb 2023
The authors provide a clear description and helpful discussion of a useful tool, FACTS.
I have a few minor comments and questions for the authors' consideration.
L53: The acronym FAIR is used in two senses: (i) "Findable, Accessible, Interoperable, Reusable" (only once, at L53), and (ii) the FAIR climate model. Only the first of these is defined. To avoid potential confusion, could the acronym be reserved for the FAIR climate model?
L165: Should "FAIR 1.0" not be "FACTS 1.0"?
L250-252: When calculating thermosteric sea level rise, the authors state that the tlm/sterodynamics module uses ocean heat content (OHC) from the fair/temperature module, following Fox-Kemper et al. (2021). Does FAIR produce the OHC data, or is the climate simulation step bypassed (L150)? I understand that Fox-Kemper et al. (2021) used OHC from a two-layer energy balance model. Have I misunderstood something?
Table 1: Does every module sample a distribution? Or do some modules produce only a single time series? In particular, I am a little confused by the nature of the output from the land water storage and vertical land motion modules (L271-297).
L345: Should "Table 2" not be "Table 1"?
L375-378: I understand that Bamber et al. (2019) sought to account for dependence between the ice sheet components. Is this dependence preserved in workflow 4? Does this explain the large positive interaction evident in the early decades of workflow 4 (Fig. 4 bottom row)? If so, why does this positive interaction decrease in later decades?
Figures 2, 3, and 5 captions: Should "think" not be "thin"?
Citation: https://doi.org/10.5194/egusphere-2023-14-CC1 -
AC1: 'Reply on CC1', Robert Kopp, 16 May 2023
- L53: The acronym FAIR is used in two senses: (i) "Findable, Accessible, Interoperable, Reusuable" (only once, at L53), and (ii) the FAIR climate model. Only the first of these is defined. To avoid potential confusion, could the acronym be reserved for the FAIR climate model?
Thank you for the suggestion. We will replace the reference to “FAIR science” with “open science”. We have also adjusted the acronym used for the FaIR climate model to have a lowercase ‘a’, which has become the preferred usage.
- L165: Should "FAIR 1.0" not be "FACTS 1.0"?
You are correct. Thank you for correcting the error.
- L250-252: When calculating thermosteric sea level rise, the authors state that the tlm/sterodynamics module uses ocean heat content (OHC) from the fair/temperature module, following Fox-Kemper et al. (2021). Does FAIR produce the OHC data, or is the climate simulation step bypassed (L150)? I understand that Fox-Kemper et al. (2021) used OHC from a two-layer energy balance model. Have I misunderstood something?
The two-layer energy balance model is incorporated into FaIR as an alternative representation of the forcing-temperature coupling, based on Geoffroy et al. (2013). (See https://docs.fairmodel.net/en/v1.6.3/examples.html#geoffroy-temperature-function) . We will add a parenthetical comment in the text to this effect.
- Table 1: Does every module sample a distribution? Or do some modules produce only a single time series? In particular, I am a little confused by the nature of the output from the land water storage and vertical land motion modules (L271-297).
Yes, every module produces a distribution. For land water storage, we will note: “Uncertainty in the projections is generated by sampling the parameters of the sigmoidal fit for reservoir storage and linear fit for groundwater depletion.” For VLM: “Uncertainty in the projections is generated based on the uncertainty in the estimate of the constant trend.”
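As an illustration of the VLM case, a module whose output distribution comes only from uncertainty in a constant trend can be sketched as follows (the rate and its uncertainty are hypothetical values for one site, not FACTS defaults):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical long-term VLM rate at one site and its 1-sigma uncertainty
# (e.g. from a Gaussian-process fit to tide-gauge records).
vlm_rate, vlm_sigma = -1.5, 0.3   # mm/yr (negative = subsidence)

years = np.arange(2020, 2101, 10)
n_samples = 2000

# Each sample draws a single constant rate and extrapolates it linearly,
# so the spread of the projected distribution grows linearly in time.
rates = rng.normal(vlm_rate, vlm_sigma, size=n_samples)
vlm_projection = rates[:, None] * (years - 2020)[None, :]   # mm relative to 2020
```

By construction this distribution is independent of emissions scenario, reflecting the constant-trend assumption discussed above.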
- L345: Should "Table 2" not be "Table 1"?
Yes, thank you.
- L375-378: I understand that Bamber et al. (2019) sought to account for dependence between the ice sheet components. Is this dependence preserved in workflow 4? Does this explain the large positive interaction evident in the early decades of workflow 4 (Fig. 4 bottom row)? If so, why does this positive interaction decrease in later decades?
The samples used in workflow 4 for the ice sheets are directly taken from those reported by Bamber et al. (2019), preserving all correlations therein. While a deep dive into the correlations is beyond the scope of this manuscript, it is by construction consistent with the Monte Carlo samples of ice sheets generated in Bamber et al. (2019), because the samples are used directly as a static input. Bear in mind that (1) the absolute variance early in the 21st century is quite small, and (2) correlations in Bamber et al. (2019) were not elicited early in the 21st century, so I would not read too much into the high positive correlation in the early decades – the import of this correlation is small, given the small overall variance.
- Figures 2, 3, and 5 captions: Should "think" not be "thin"?
Yes, thank you.
Citation: https://doi.org/10.5194/egusphere-2023-14-AC1
-
CC2: 'Clarification about the method to project ODSL', Dewi Le Bars, 06 Feb 2023
Dear authors,
Thank you for this effort to clearly describe this framework and to make the code openly available. This work is of great value to the community.
I do not understand the description of the method for ODSL in l.255-258:
“The resulting global mean thermosteric sea-level rise is then combined with ocean dynamic sea-level change and the inverse barometer effect using the gridded output of CMIP6 models (see the right column of Table A1 for the models that were used in (Fox-Kemper et al., 2021)), based on the time-varying correlation structure between global mean thermal expansion and ocean dynamic sea-level change in the multi-model ensemble.”

I don’t think that a correlation structure is enough information to reconstruct ODSL from global mean thermal expansion. Is this method similar to Palmer et al. 2020? (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019EF001413) There, it is described as a linear regression, but instead of being between global mean thermal expansion and ODSL it is between global mean thermal expansion and sterodynamic sea-level change:
“Following previous studies (Bilbao et al., 2015; Palmer, Howard, et al., 2018; Perrette et al., 2013), the effects of local changes in ocean density and circulation are included by establishing regression relationships between global thermal expansion and local sterodynamic sea-level change in CMIP5 climate model simulations”

Your clarifications on this point would be very much appreciated. In some places the assumption of a linear regression between global mean thermal expansion and ocean dynamic sea-level change is not very accurate, so making this assumption clearer would help people decide whether this framework works for them or not.
Thank you,
Dewi Le Bars
Citation: https://doi.org/10.5194/egusphere-2023-14-CC2 -
AC2: 'Reply on CC2', Robert Kopp, 16 May 2023
Dear Dewi,
Thank you for the question. We will add an appendix to the following effect:
The method used is a modification of that described in Kopp et al. (2014). Global mean thermal expansion projections are generated from the two-layer model. Ocean dynamic sea level is assumed to have a degree of correlation with global mean thermal expansion, with the correlation assessed on a grid-cell basis from the CMIP6 ensemble for a particular SSP scenario. Given a sample of 19-year-average global mean thermal expansion y at a particular point in time, 19-year-average ocean dynamic sea level z is taken as distributed following a t-distribution with a conditional mean of

z′(r) + σ(r) k(r) (y − y′)/s

and a conditional standard deviation proportional to

σ(r) (1 − k(r)²),

where z′(r) is the multimodel mean ocean dynamic sea level at location r, σ(r) is the multimodel standard deviation, k(r) is the correlation between global mean thermal expansion and ocean dynamic sea level z(r), y′ is the multimodel mean global mean thermal expansion, and s is the multimodel standard deviation of global mean thermal expansion. The standard deviation is inflated relative to that of the ensemble to account for the expert judgment that the 5th-95th percentile range of the ensemble may have as much as a 33% chance of being exceeded on either end (i.e., the 5th-95th percentile range is treated as a likely range). Though the parameters of this regression model are re-fit for each time point, correlation across time is preserved (perhaps excessively) in sampling by drawing (via Latin hypercube sampling) a single quantile of the variance characterized by the conditional standard deviation to use at all time points for a given time series sample. In sampling the t-distribution, the number of degrees of freedom is taken as the number of GCMs providing DSL projections for a particular grid cell in the scenario used for calibration.
In some ways, the approach is similar to that of a linear-regression based scaling of ocean dynamic sea level on global mean thermal expansion, as in Palmer et al. (2020). The commonality is the assumption that the distribution of ocean dynamic sea level at a given point may be constrained by information about global mean thermal expansion. (“May” is an operative word here — it is also possible for the scaling factor or correlation coefficient to be zero).
One important difference is that this approach is recalibrated for each time step, whereas the Palmer et al. approach finds a single regression coefficient for a given GCM across time. A second is that the uncertainty not captured by the characterized correlation is sampled, whereas in Palmer et al., all variance is assumed to be captured by the spread of regression coefficients across GCMs. The approach used here is more focused on the distributional characteristics across GCMs, as opposed to representing each individual GCM by a regression coefficient. As a consequence of these differences, the Kopp et al. (2014) approach loses a degree of traceability to individual GCMs, being instead focused on preserving the distributional properties assessed based on the ensemble.
Note that where thermal expansion and ocean dynamic sea level are uncorrelated, this approach returns simply the multimodel mean and scaled standard deviation for the scenario.
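A minimal runnable sketch of this conditional sampling scheme, with the ensemble statistics at a single grid cell replaced by hypothetical numbers (all values below are illustrative, not from the manuscript or the CMIP6 ensemble):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical multimodel statistics at one grid cell and one time point.
z_mean, z_sd = 0.12, 0.05   # m: mean / sd of ocean dynamic sea level (z'(r), sigma(r))
y_mean, y_sd = 0.15, 0.03   # m: mean / sd of global thermal expansion (y', s)
k = 0.6                     # correlation k(r) between the two across the ensemble
n_models = 15               # GCMs providing DSL at this cell -> t degrees of freedom
inflate = 1.0               # >1 widens the spread, per the expert judgment above

def sample_odsl(y_samples, quantile):
    """Draw ocean dynamic sea level conditional on thermal-expansion samples.
    `quantile` is a single Latin-hypercube-style draw in (0, 1) per sample,
    reused at every time point to preserve correlation across time."""
    cond_mean = z_mean + z_sd * k * (y_samples - y_mean) / y_sd
    cond_sd = inflate * z_sd * (1.0 - k**2)
    return cond_mean + cond_sd * stats.t.ppf(quantile, df=n_models)

y = rng.normal(y_mean, y_sd, size=1000)    # thermal-expansion samples
q = rng.uniform(size=1000)                 # one quantile per time series
z = sample_odsl(y, q)
```

With k = 0 the conditional mean reduces to the multimodel mean and the spread to the (scaled) multimodel standard deviation, matching the note above.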
Best,
The authors
Citation: https://doi.org/10.5194/egusphere-2023-14-AC2
-
CC3: 'Comment on egusphere-2023-14', Vanessa Völz, 08 Feb 2023
Very helpful description of FACTS!
Chapter 3, L334: Aren't there 20,000 instead of 2,000 Monte Carlo samples?
Citation: https://doi.org/10.5194/egusphere-2023-14-CC3 -
AC3: 'Reply on CC3', Robert Kopp, 16 May 2023
Thank you! This manuscript describes FACTS 1.0 and demonstrates it using a set of modules that were developed, in part, to support the AR6 assessment. It does not document the AR6 sea level projections, for which 20,000 Monte Carlo samples were run; the demonstration here uses 2,000 Monte Carlo samples.
Citation: https://doi.org/10.5194/egusphere-2023-14-AC3
-
RC1: 'Comment on egusphere-2023-14', Anonymous Referee #1, 25 Apr 2023
The authors present the FACTS framework to probabilistically estimate future sea level rise, globally and regionally. The framework aims to make it possible to seamlessly exchange individual drivers of global sea level rise so structural uncertainty can be explored. The presented work stands out as it underpins several authoritative sea level assessments, of which the most prominent is the IPCC AR6 WG1 assessment.
Due to this special position of FACTS, usability and replicability are a key concern. I therefore split this review into two parts: part one is on the scientific aspects and clarity. My comments here are mainly on clarification and better explanation, because the method is a continuation of established works, because the AR6 methodology is fixed, and because I see a major function of the manuscript as documenting that methodology. Part two is on usability and replicability: readers should be able to replicate AR6 sea level numbers with the manuscript and the code at hand without being experts in specific high performance computing environments. I tried but failed. (I followed the rather succinct “Quick Start” documentation.) I propose improvements to be made to the manuscript and to the code to overcome this. Only if I (as an example user) succeed in a "replication of the AR6 approach entirely within FACTS" (stated in lines 72-73) can the work reach its full potential and follow its aspiration to become a "larger-scale community project" (line 505).
Part 1:
I have four points that need more clarity in my view.

1) It is not straightforward to understand how the IPCC AR6 numbers are derived from FACTS. It is described in L411ff, which is already part of the discussion section. The manuscript would profit from stating upfront how the AR6 numbers are constructed, for example as part of sec 3.3 or as a separate section. I also advocate for stating the AR6 numbers directly within the manuscript (i.e. within tables 3 and 4), which would make comparison easier. For now I find close correspondence, but no replication of IPCC AR6 numbers (from WG1 Table 9.9). For replication I would expect the numbers to match. If not, I would at least expect a paragraph where the numbers are related and differences justified. Ideally the setup for AR6 replication (a "cookbook") would be prepared within the codebase so that the user does not have to manually infer the settings from the manuscript.
2) VLM is now recognized as a key driver of relative sea level rise and thus impacts (i.e. Nicholls et al, 2021), but it does not get the necessary attention in the manuscript.
a) VLM estimation is based on Kopp et al. 2014, which uses a Gaussian process model to fit historical tide gauge data. The step from fitting to tide gauge data (yielding spatial fields of relative sea level as output) to estimating the VLM component is not clear to me even after reading the SI of Kopp et al. 2014. This needs additional explanation.
b) To my knowledge the approach does not involve direct observations of VLM (i.e. GNSS), where much progress has been seen for VLM estimation. For example, involving such measurements to correct tide gauge measurements for VLM crucially helped to close the sea level budget (Frederikse et al 2020). Can you justify the default choice of Kopp 2014?
c) Changes in contemporary ice mass loading affect not only the ocean water distribution (which I see included through the GRD fingerprints), but also VLM. GRD fingerprints are mentioned for ice sheets (l222), but the reference is more than 20 years old, and it is not clear whether newer works, especially those of Thomas Frederikse (2017, 2019, 2020), are represented and how and if they affect the VLM estimates of the presented work.
d) VLM is independent of future warming, which could be said more clearly (currently referred to as "constant trend", l285) and also stated as a caveat: future ice mass loss is scenario dependent and will influence the VLM rate, but this is not implemented in FACTS.

3) It is not clear to which baseline the individual contributions are referenced. Though the authors mention Gregory et al. 2019, which did an excellent job of clarifying terminology, the reference frame is not stated explicitly for each component. The manuscript would profit from such explicit statements. Is the FACTS regional relative sea level rise N15 in Gregory et al. 2019? Are the components in the geocentric reference frame? Clarification on the reference frame will help scientists add new modules.
4) FACTS only works for the standard RCP/SSP scenarios except for workflow 1e if I understand it correctly. This is different to sea level emulators and I understood it only late in the text. This should be made more prominent so readers can better contextualise this work.
Detailed comments (part 1):
l15-l18: can we include a non-US reference as well.
l29: introduce relative sea level rise and its definition (e.g Gregory et al. 2019)
l34: is relative sea level changes here the right term? So did Mitrovica already look into VLM influenced by West Antarctic ice loss or is it only about water mass redistribution?
l38: this paragraph does not mention vertical land motion, a key component of local relative sea level rise.
l47: “a single probability distribution”
l111: a word missing after MPI/OpenMP?
l109-l142: Why this detailed description of RADICAL-Cybertools? It distracts from the story and does not help to get the code running. I would revise and shorten this and describe the environment in terminology understandable to sea level and climate scientists. See also part 2 of the review.
l145/Figure1: what do the abbreviations like WF1e in the integration and extreme sea level step mean? They are not explained here.
l167: “bring this formerly offline simulation within FACTS” is not good to understand. Please reformulate for clarity.
l170: “demonstrate the ability”
l185: “an additional basal ice shelf melting” can this be more concrete with a number?
l187: convoluted→convolved?
l187-192: I would reorder the sentences so the order represents the causal chain from global mean temperature change to ice loss. As of now a bit hard to follow.
l198ff: it is not clear to me from this paragraph if the authors implement a method already present in the AR5 or if they create a method in FACTS to capture the numbers of 2005-2010 observed and 2100 projected ice loss of the AR5.
l204: “a negative rate is added”: can you say this more precisely?
l219: appled→ applied
l219 “were applied in the context of the corresponding” is not clear. Do you mean “the RCP scenario projections were treated as SSP scenario projections”?
L220: fingerprints precomputed, do they include the ocean bottom deformation part?
l221: “includes”→“include”
l221ff: do the GRD fingerprints only influence the geocentric part of relative sea level rise or also VLM?
l227: “of 2015-2100 glacier loss” or similar;
L227: which RCP scenarios are used?
l233: fI(t)^p: using f for the parameter here is confusing, it reads like a function, maybe write
"f x I(t)" or choose another parameter name.
l235: “a set of glacier models”: can you be more specific?
l233ff: if readers do not cross this paragraph, they do not understand that the method used in the AR6 is named ipccar5, but uses an updated calibration.
l246ff: are these GRD fingerprints?
l251: what does “tlm/” abbreviate?
l259: where is the dedrifting and regridding documented to reproduce the work?
l264: “is then projected”
l266: “projects global mean thermosteric sea-level rise, taking as input ... global mean thermosteric sea-level rise …” is confusing to read. I suggest to revise this sentence.
l283ff: learning here that all earlier described components do not include VLM, so they do not output relative rise. It would be good to make this explicit before. It would be also good to say on which reference system all the other components work.
L285: "constant trend": this means that future VLM is independent of future warming. Good to say this more explicitly.
l307: “Below the support” is hard to understand. Rephrase.
l314: “, with the substitution …” this part of the sentence is hard to follow. Rephrase.
l323/Table1: module names ipccar5/ and ipccar6/ suggest that these are the ones used in the respective IPCC reports and the others not, but this is not the case following the text. This should be made clear in the caption.
l327: can we give a more precise ref than just AR6? Is it the unshaded cells in Table 9.9? Full stop missing after the reference.
l329/Table2: can we mark here which are the workflows for IPCC AR6 projections?
l331: is it the shaded last row of Table 9.9 AR6 WG1? Please reference.
l344/45: this means this is not a full emulator, as FACTS cannot map global mean temperature to sea-level rise. Depending on the modules used, it is restricted to RCP/SSP scenarios.
L346/Table3: the reader is left wondering how these numbers compare to AR6. It would help to present the AR6 numbers again in Tables 3/4 and discuss deviations in a paragraph.
l357: cm→m
l358: what are “Workflow pairs”?
L372ff: please add a reference or an explanation of how the projection variance and interaction terms are calculated.
l405: explain TE
L410: Indeed, I would see it as a major aim of the manuscript to replicate the main AR6 SLR projections.
L417: why the difference in how likely ranges are defined in this study compared to the rest of the IPCC AR6? Can the motivation be stated?
L450: with some caveats: can you detail how this was translated?
L474: "... for glaciers". I am not sure if this is generally true. Also glaciers have different timings of mass loss and disappearance in different world areas.
Part 2:
The authors state in l71 that "FACTS 1.0 allows replication of the AR6 approach entirely within FACTS", but I did not manage to make the code work on our computers. A main hurdle is the EnTK framework, which seems to be a specific framework only installed on certain supercomputers. Making this the default option to run FACTS hinders most scientists from replicating the work. I would therefore advise changing the default option for running FACTS to something generic that many scientists are accustomed to. The authors provide a blueprint for such an option via a shell script. I recommend making this the default option, or providing an alternative approach that can be used to reproduce AR6 numbers. In any case, the code should be cleared of hardcoded paths (e.g. using one configuration file shared across modules, or one per module with a consistent format across modules), and the authors should better describe how R should be installed to make the land ice emulators work. Also provide a description of how to reproduce the AR6 numbers within the code/README. Ideally FACTS would in addition be provided as a package that could be installed using the usual tools (pip install or similar).
Concerning the manuscript, a clear reference to the AR6 numbers FACTS aims to replicate is missing. I expect these are Table 9.9 of AR6 WG1. One solution would be to add them to Tables 3 and 4 of the manuscript for direct comparison. I also recommend providing the computational cost per module (CPUh or similar) so potential users of the framework can judge if installation of the EnTK framework is necessary.
Detailed comments (part 2):
The code base includes a large number of dependencies, including a heavy dependence on R. The authors provide some guidance on how to install the dependencies, but it appears set up for their specific system, and is not particularly user-friendly for the larger scientific community. In particular, library locations and work directories are hard-coded in files scattered throughout the project (in individual modules), as opposed to being clearly indicated in a centralised configuration file.
For instance, to run the config provided in the doc:
cp -r experiments/coupling.ssp585/config.yml test
python3 runFACTS.py test
This first fails because of missing files in modules/emulandice/shared (emulandice_1.1.0.tar.gz and emulandice_bundled_dependencies.tgz). To produce them, it was necessary to set up a local R environment. This required among other things to edit:
modules/emulandice/shared/emulandice_environment.sh : the line “module use /projects/community/modulefiles” had to be commented out.
modules/emulandice/shared/emulandice_bundle_dependencies.R:
packrat::set_opts(local.repos = c("/projects/community/R3.6_lib_workshop","."))
packrat::install_local('cli')
… needed to be replaced with the more traditional:
install.packages('cli')
The authors did provide a README file in that directory with the mention:
“You will likely need to customize emulandice_environment.sh and emulandice_bundle_dependencies.R based on your local environment.”
But we recommend the default to be set up for a generic Linux system, and “customization” reserved for use on the authors' HPC, instead of (currently) the opposite.
The EnTK framework is the largest hurdle for use in the wider scientific community. The welcomed, alternative --shellscript option is experimental. We identify it as the main area to improve in order to disseminate the work.
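To make the recommendation concrete, a minimal Python sketch of the kind of centralised configuration suggested above; the file layout, key names, and the get_path helper are hypothetical illustrations, not part of FACTS:

```python
import os

# Hypothetical centralised defaults, replacing per-module hard-coded paths
# such as /scratch/<user> or /projects/community/R3.6_lib_workshop.
# In a real setup these would be read once from a single shared config file.
DEFAULTS = {
    "workdir": os.path.join(os.path.expanduser("~"), "facts_scratch"),
    "r_library": os.environ.get("R_LIBS_USER", ""),
}

def get_path(key, overrides=None):
    """Return a configured path, letting per-site overrides win over defaults."""
    cfg = dict(DEFAULTS)
    if overrides:
        cfg.update(overrides)
    return cfg[key]

# A module would then ask for its work directory instead of hard-coding it:
workdir = get_path("workdir", overrides={"workdir": "local_scratch"})
```

With every module reading paths through one such accessor, site-specific customisation would be confined to a single file rather than edits scattered across modules.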
runFACTS.py (issue with the EnTK framework)
The Mongo DB Server installation was smooth following the instructions provided by the authors. However, we quickly ran into issues with their EnTK framework when following the documentation:
python3 runFACTS.py experiments/dummy
“radical.entk.exceptions.EnTKError: Shell on target host failed: Cannot use new prompt, parsing failed”
The authors offer an alternative (https://fact-sealevel.readthedocs.io/en/latest/quickstart.html#testing-a-module-with-a-shell-script) where the code produces a shell script to bypass the EnTK framework, but with a strong disclaimer ("Performance is not guaranteed, and multi-module experiments are very likely not to work without customization.").
I tested the dummy setup and had to make minor modifications to runFACTS.py:
- print(' WORKDIR=/scratch/`whoami`/test.`date +%s`')
+ print(' WORKDIR=local_scratch/`whoami`/test.`date +%s`')
- print(' OUTPUTDIR=/scratch/`whoami`/test.`date +%s`/output')
+ print(' OUTPUTDIR=local_scratch/`whoami`/test.`date +%s`/output')
And create the local_scratch folder.
Then:
python3 runFACTS.py --shellscript experiments/dummy > test_dummy.sh
source test_dummy.sh
This ran without error, but also produced no output.
I then tried the other configuration file indicated in the documentation, again with the --shellscript option:
mkdir test
cp -r experiments/coupling.ssp585/config.yml test
python3 runFACTS.py --shellscript > test_coupling.sh
source test_coupling.sh
I ran into issues related to library installation and hard-coded paths as described in the previous section. Once overcome, the script ran, but new error messages appeared:
> cp: cannot create regular file 'local_scratch/reviewer/test.1681896393/output': No such file or directory
> cp: target 'local_scratch/reviewer/test.1681896393/test.GrIS1f.FittedISMIP.GrIS' is not a directory
(local_scratch is a local folder I created to replace the hard-coded, author-specific architecture /scratch)
Given the disclaimer provided by the authors on the experimental nature of the --shellscript option, I did not attempt to run this further.
References
Frederikse, T., Riva, R. E., & King, M. A. (2017). Ocean bottom deformation due to present-day mass redistribution and its impact on sea level observations. Geophysical Research Letters, 44(24), 12,306–12,314.
Frederikse, T., Landerer, F. W., & Caron, L. (2019). The imprints of contemporary mass redistribution on local sea level and vertical land motion observations. Solid Earth, 10(6), 1971-1987.
Frederikse, T., Landerer, F., Caron, L., Adhikari, S., Parkes, D., Humphrey, V. W., ... & Wu, Y. H. (2020). The causes of sea-level rise since 1900. Nature, 584(7821), 393-397.
Nicholls, R. J., Lincke, D., Hinkel, J., Brown, S., Vafeidis, A. T., Meyssignac, B., ... & Fang, J. (2021). A global analysis of subsidence, relative sea-level change and coastal flood exposure. Nature Climate Change, 11(4), 338-342.
Citation: https://doi.org/10.5194/egusphere-2023-14-RC1
AC4: 'Reply on RC1', Robert Kopp, 16 May 2023
We thank the reviewer for their comments. Please see detailed response attached.
Regarding the reviewer's challenges installing FACTS, we were not able to replicate all of them. However, we have made a few changes intended to simplify the installation of FACTS 1.0.0. Most notably, we have adopted more generally applicable (though slower) defaults for the R setup in the emulandice module, added details of the set up of this module to the Quick Start instructions, and created a script (vm_factsenvsetup.sh) that is successfully able to set up FACTS within a vanilla Ubuntu Focal virtual machine.
-
RC2: 'Comment on egusphere-2023-14', Luke Jackson, 15 Jun 2023
General Comments
This paper outlines a modular platform designed to harmonise and internally calculate (tidal-datum-epoch) mean sea-level contributions from all major global and local sea-level components, dependent upon a climate model emulator, with post-processing to localise projections and apply them to extreme water levels. The modular structure enables different combinations of sea-level component emulators/datasets to be used within a fully probabilistic framework, thus accounting for aleatory and epistemic uncertainty. The GSL and New York examples demonstrate the utility of the framework effectively. Scientifically, the framework benefits from more than a decade of research that exploits the budgetary approach to probabilistic sea-level change developed by numerous researchers. The framework also shows great potential and flexibility – I hope this will become an evolving resource that the community can utilise in future.
Overall, this is a welcome piece of work to the research community. It is carefully written and, in most places, clear to follow. There are a number of places where additional detail is required, ideally an additional case study would be shown, and a few Figures/Tables need updating.
Specific Comments
The Introduction has a strong IPCC focus. While this provides context, additional references to key work are important – particularly the development of a budgetary approach to SL, which is essential to the process-based method of projection.
The choice of Workflows presented focuses specifically on combinations of AIS/GrIS emulations/outputs. The results certainly clarify the IPCC decision-making process (medium/low confidence) but showing a more diverse selection of modules would showcase the FACTs framework more effectively as an assessment tool (e.g., VLM). Adding a separate city-level case study where VLM is focused upon (in addition to the current NYC example showcasing ice sheet combinations) would be valuable.
The issue of a common reference timescale is only partially addressed. This is pertinent given your mention of IPCC AR5 (ref timescale 1986-2005) and some sea-level components (e.g., GrIS SMB Fettweis et al. 2013) that are relative to an alternative baseline (e.g., ~1970s). Highlighting this earlier and how this is dealt with (either as a post-processing step or module specific step) is very important. Likewise, you need to explain how/if this timescale can be user defined.
Vertical Land Motion needs additional detail, and the issue of double counting needs to be addressed in the framework. If the barystatic fingerprints to be scaled use RSL, then they will contain a VLM component (as does the GIA RSL fingerprint). Scaling and summation of SL components within the third “Step” of the framework would then induce possible double counting if the VLM component has not been corrected for this (arguably small) component. This would be true of K14 VLM or NZ VLM, as you allude to in the discussion.
Technical Corrections
Line 5: rephrase “a modular … sea-level rise”, needs to allude to individual SL components to generate global, regional, ESL projections.
Line 35-37: Highlight earlier work of Slangen et al. (2012) that led into AR5.
Line 43: “They also … core elements” – rephrase it is not clear what you mean here.
Line 44: “these studies” – which studies? No reference to specifics or examples.
Line 44: K14 and K17 do not refer specifically to “ProjectSL/LocalizeSL Framework” despite these being associated with the coded release. Rephrase to be more general and highlight additional frameworks beyond K14/K17.
Line 47: “This assumption …” rephrase for grammar, and an example (ideally non-SL to aid the reader) would be useful.
Line 49: remove “in”, and add “different components” to “different sea-level components”
Line 69: Slangen et al. in press – now published – update
Line 82: Terminology “Steps” versus “step” – is there a difference between these?
Line 95: “total” – you can only refer to total if all SL components at an appropriate scale are accounted for.
Line 103: “combined” – the notion of combining workflows needs articulating – this is not explained in the examples either – do you mean something like Sweet et al. (2022)?
Line 113: “RADICAL-SAGA” is not discussed in the manuscript – what is its purpose?
Line 124: “exposes” – what do you mean by this?
Line 124/5: “Those constructs … tasks” is effectively repeated on line 128-129 – consider merging for clarity
Line 125: Terminology “Ensemble” appears here then isn’t mentioned elsewhere.
Line 131: “failures of Tasks …” – what about failed Stages or Pipelines – how does the framework deal with these higher-level issues?
Line 137-140: RADICAL Tools, their roles, Pipeline, Stage and Task need to be included within the schematic diagram (Figure 1) to better communicate section 2.2
Figure 1: see comment above. Including the 7 workflows discussed is of limited value if this is a conceptual diagram – a number of “example” workflows showcasing Task, stage and Pipelines would serve better.
Line 148: “emissions scenario as input” – this is the only input/variability that is feasible within this module? For example, how many ocean layers are used (is it a default FAIR setup?)?
Section 2.3.2 would benefit from each sea-level component being within a separate subsection (e.g., Section 2.3.2.1 Generic) rather than an italicised header that is inline with the main text.
Line 157: facts/directsample : does not appear in Figure 1
Line 165: FAIR 1.0 – should this be FACTS1.0 ?
Line 168/169: “The emulandice …” new paragraph
Line 170: “simulations and demonstrates” to “simulations. These demonstrate”
Line 184: does Levermann et al. 2020 refer to the original code or the modification to speed it up? If the former then move the citation prior to the comma in Line 183.
Line 196: “fixed rate of mass loss” – does this rate refer to basal melting or SMB or both?
Line 196/197: “This assumption, … underestimate …” Does it? By how much?
Line 214: “through” to “to”
Line 215/216: “In IPCC AR6 … projections” What do you mean by “applied in the context of” – this is ambiguous.
Line 219: “were appled” (spelling) and rephrase – sentence reads oddly – perhaps alternative wording to “applied”
Line 221: “includes” to “include”
Line 221: “regional scaling” to “regionalisation of their global SL equivalent contribution” (or similar)
Line 224: needs more information on the fingerprints (e.g., all barystatic components – grouped or separate or both such as AIS or WAIS, APIS, EAIS or by-sector, realistic (observed) or uniform, range of earth properties or not?)
Line 239/240: Remove “(In this manuscript, …)”
Line 257: “(Fox-Kemper et al., 2021))” to “Fox-Kemper et al., (2021)”
Line 276: “The model-based …” – not clear if this relates to module or K14 method
Line 288/289: “Sensitivity tests …” – are these published somewhere? If not, then inclusion within an Appendix would be very useful.
Line 290: “The spatiotemporal model” – new paragraph but I assume you are talking about VLM? Needs restating.
Line 293: “tuned” – what do you mean by this? Is this a scaling procedure/process to minimise misfit to observations?
Line 295-297: much more detail on NZInsar module needed as vertical reference frame of InSAR/GPS differs from RSL benchmarks regionally. The way rates etc from this module are harmonised is important to explain. Likewise, direct VLM observation will contain components of VLM from each barystatic SL component and GIA – how are these accounted for (prior or post Step 3)?
Line 306: “Annual means … stage” place this before fitting sentence to order processing steps correctly.
Line 308: “Below the … (Buchanan et al., 2016).” I didn’t understand this sentence – sorry – rephrase
Line 311: Need a statement here that the dynamic evolution of ESL (due to changing atmos-ocean clim. Variability) is not accounted for in the current framework.
Line 313: Remind the reader of Workflows here.
Line 333: “pseudo-random” – what do you mean by this?
Line 338: temperature should include a ° symbol
Line 358: “cm” to “m”
Line 370: stay consistent with units – switch to m
Table 3: add GSAT into title; rewrite footer for consistency so that T and GMSL are clearly identified with different baselines rather than using “except”; use an asterisk to refer to specific modules that use the complementary RCP rather than the SSP, and refer to this in the footer so the reader doesn’t have to go hunting elsewhere in the text.
Figure 3: Present time to 2100 rather than 2150 so that it is consistent with other Figures. Since you are not showing the uncertainty change shape after 2100 the thick/thin bars could mislead the reader; caption spelling mistake “think” to “thin”.
Line 457-464: Missing detail on GIA (as distinct from VLM), such as GIA uncertainty (e.g., Melini & Spada, 2019) and present-day GRD uncertainty (e.g., Cederberg et al. 2023).
Line 462: “scenarios” – should mention non-linear VLM behaviour too.
Line 478-484: See Specific Comment on common timescales – emphasis on compatibility of component-based timescale including use of observation/projection-data to supplement historical runs to bridge the gap between modelling intercomparison projects remains important.
Citation: https://doi.org/10.5194/egusphere-2023-14-RC2
AC5: 'Reply on RC2', Robert Kopp, 26 Jun 2023
We thank Dr. Jackson for his detailed and helpful comments. We respond to his specific comments here, and will address his technical comments in the revision of the manuscript.
- The Introduction has a strong IPCC focus. While this provides context, additional references to key work are important – particularly the development of a budgetary approach to SL, which is essential to the process-based method of projection.
Per the reviewer’s suggestion under Technical Corrections, we have added a reference to Slangen et al. (2012) in the narrative leading up to AR5. We have also clarified that the reference to Fox-Kemper et al. (2021) describing the “numerous subsequent studies” is to the overview in section 9.6.3.1. We have also added: “Examples of open-source probabilistic sea-level projection frameworks include the ProjectSL/LocalizeSL framework (Kopp and Rasmussen, 2021) developed by Kopp et al. (2014, 2017), and BRICK (Wong et al., 2017). Additional studies present probabilistic RSL projection methodologies without associated open-source software releases (e.g., Slangen et al., 2014; Grinsted et al., 2015; Jackson and Jevrejeva, 2016; Le Cozannet et al., 2019; Palmer et al., 2020).”
- The choice of Workflows presented focuses specifically on combinations of AIS/GrIS emulations/outputs. The results certainly clarify the IPCC decision-making process (medium/low confidence) but showing a more diverse selection of modules would showcase the FACTs framework more effectively as an assessment tool (e.g., VLM). Adding a separate city-level case study where VLM is focused upon (in addition to the current NYC example showcasing ice sheet combinations) would be valuable.
We appreciate the reviewer’s suggestion regarding the inclusion of a VLM-focused case study, but the emphasis in this manuscript is on the distinctive approach FACTS takes to allow exploration of structural uncertainty. It provides an accurate representation of the current state of the code, which was strongly influenced by the needs of AR6. It is possible for users building off of FACTS to develop additional modules, but at the moment we have only one VLM module (kopp2014/verticallandmotion) with global coverage.
We have clarified that the alternative VLM module (NZInsarGPS/verticallandmotion) samples gridded land motion data directly from an external file. We note that, in Naish et al. (2022), this module applies a gridded data file describing rates of land motion inferred from interferometric synthetic aperture radar (InSAR) data. We do not add an additional case study, but some of the challenges in thinking about appropriate VLM projections to incorporate into RSL projections are described in Naish et al. (2022, https://doi.org/10.1002/essoar.10511878.1).
- The issue of a common reference timescale is only partially addressed. This is pertinent given your mention of IPCC AR5 (ref timescale 1986-2005) and some sea-level components (e.g., GrIS SMB Fettweis et al. 2013) that are relative to an alternative baseline (e.g., ~1970s). Highlighting this earlier and how this is dealt with (either as a post-processing step or module specific step) is very important. Likewise, you need to explain how/if this timescale can be user defined.
This is a convention that must be consistent across modules. It is standard for modules to have ‘baseyear’ as a parameter (for AR6 examples, this is set to 2005, the midpoint of the 1995-2014 reference period).
We now note: “Configuration options such as the number of samples to run, the time points at which calculations are reported, and the reference period used for output can be globally specified but are implemented on a module-by-module basis.” In the discussion of directions for improvement, we state “The existing FACTS modules start projections in the 21st century.” This is less specific than previously, as – while 2005 (1995-2014) is used as the center point for the results presented here – most of the modules can handle a centerpoint at any time after the year 2000 without problems. The point of the discussion is the absence of historical data, not the way in which reference periods are handled.
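To illustrate what the 'baseyear' convention amounts to, the sketch below re-expresses a toy sea-level series relative to a chosen base year. The function name and interface are hypothetical, for illustration only; FACTS implements this module-by-module via its 'baseyear' parameter.

```python
def rebaseline(series, baseyear=2005):
    """Shift a {year: metres} sea-level series so it is zero at baseyear.

    Illustrative sketch only (not FACTS code); assumes baseyear is a key
    of the series, e.g. 2005 as the midpoint of the 1995-2014 reference
    period used in AR6.
    """
    ref = series[baseyear]
    return {year: value - ref for year, value in series.items()}

# Example: a toy GMSL series re-expressed relative to 2005
gmsl = {2005: 0.02, 2050: 0.25, 2100: 0.80}
gmsl_rel = rebaseline(gmsl)  # values now relative to the 2005 baseline
```

The key point is that this shift must use the same reference period in every module before components are summed.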
- Vertical Land Motion needs additional detail, and the issue of double counting needs to be addressed in the framework. If the barystatic fingerprints to be scaled use RSL, then they will contain a VLM component (as does the GIA RSL fingerprint). Scaling and summation of SL components within the third “Step” of the framework would then induce possible double counting if the VLM component has not been corrected for this (arguably small) component. This would be true of K14 VLM or NZ VLM, as you allude to in the discussion.
This is addressed in the response to reviewer 1. We have clarified that we use the term ‘vertical land motion’ in the previous draft to refer to long-term vertical land motion, of the sort reflected in century-scale analysis of tide-gauge data. Elastic deformation associated with contemporary land-ice and land-water mass redistribution is accounted for in the RSL projections via the static GRD fingerprints. For the K14 module, double-counting is an issue only to the extent that there are substantial 20th century trends in deformation associated with century-scale barystatic effects. Given the magnitude of 20th century average barystatic trends (order 1.0 mm/yr), this effect will introduce a bias in most places of <±0.2 mm/yr, or 2 cm/century. This is quite small relative to other sources of uncertainty, and comparable to sampling issues.
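The order of magnitude quoted above follows from simple unit arithmetic, sketched here; the 0.2 mm/yr figure is the bound stated in the text, not a computed quantity:

```python
# Convert the stated worst-case double-counting bias from mm/yr to cm/century.
bias_mm_per_yr = 0.2          # bound stated above for most locations
years_per_century = 100
mm_per_cm = 10

bias_cm_per_century = bias_mm_per_yr * (years_per_century / mm_per_cm)
print(bias_cm_per_century)    # 2.0, i.e. 2 cm per century
```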
Double counting with respect to GIA is not an issue for the K14 method; this is taken into account in the methodology.
Citation: https://doi.org/10.5194/egusphere-2023-14-AC5
-
EC1: 'Invitation to proceed to revisions', Andrew Wickert, 15 Jun 2023
Dear Dr. Kopp and co-authors,
Based on the comments received, including the constructive criticism of both referees, I encourage you to proceed to drafting a response to the reviewer comments and a revised manuscript.
With good wishes,
Andy
Citation: https://doi.org/10.5194/egusphere-2023-14-EC1
Model code and software
Framework for Assessing Changes To Sea-level Kopp, R. E., Garner, G. G., Hermans, T. H. J., Jha, S., Kumar, P., Slangen, A. B. A., Turilli, M., Edwards, T. L., Gregory, J. M., Koubbe, G., Levermann, A., Merzky, A., Nowicki, S., Palmer, M. D., & Smith, C. https://github.com/radical-collaboration/facts
Cited
9 citations as recorded by crossref.
- Anomalous Meltwater From Ice Sheets and Ice Shelves Is a Historical Forcing G. Schmidt et al. 10.1029/2023GL106530
- Melting ice and rising seas – connecting projected change in Antarctica’s ice sheets to communities in Aotearoa New Zealand R. Levy et al. 10.1080/03036758.2023.2232743
- DSCIM-Coastal v1.1: an open-source modeling platform for global impacts of sea level rise N. Depsky et al. 10.5194/gmd-16-4331-2023
- Sea-Level Rise in Pakistan: Recommendations for Strengthening Evidence-Based Coastal Decision-Making J. Weeks et al. 10.3390/hydrology10110205
- Unprecedented Historical Erosion of US Gulf Coast: A Consequence of Accelerated Sea‐Level Rise? J. Anderson et al. 10.1029/2023EF003676
- Sea Level Rise Learning Scenarios for Adaptive Decision‐Making Based on IPCC AR6 V. Völz & J. Hinkel 10.1029/2023EF003662
- Communicating future sea-level rise uncertainty and ambiguity to assessment users R. Kopp et al. 10.1038/s41558-023-01691-8
- Urban Planning of Coastal Adaptation under Sea-Level Rise: An Agent-Based Model in the VIABLE Framework S. Sengupta et al. 10.3390/urbansci7030079
- Sea level rise projections up to 2150 in the northern Mediterranean coasts A. Vecchio et al. 10.1088/1748-9326/ad127e