This work is distributed under the Creative Commons Attribution 4.0 License.
Improved Bathymetry Estimates Beneath Amundsen Sea Ice Shelves using a Markov Chain Monte Carlo Gravity Inversion (GravMCMC, version 1)
Abstract. Bathymetry beneath ice shelves in West Antarctica plays an important role in the delivery and circulation of warm waters to the ice-shelf bottom and grounding line. Large-scale bathymetric estimates can only be inferred through inversion of airborne gravity measurements. However, previous bathymetry inversions have not robustly quantified uncertainty in the bathymetry due to assumptions inherent in the inversion, and typically only produce a single model, making it difficult to propagate uncertainty into ocean and ice-sheet models. Previous inversions have sometimes considered uncertainties in bathymetry models due to background densities but have not quantified the uncertainty due to the non-uniqueness inherent in gravity and geological variability below ice shelves. To address these issues, we develop a method to generate ensembles of bathymetry models beneath the Crosson, Dotson, and Thwaites Eastern ice shelves with independent realizations of background density and geological variability, represented through the Bouguer gravity disturbance. We sample the uncertainty in the unknown geology below the ice shelves by interpolating the Bouguer disturbance using Sequential Gaussian Simulation. Each inversion is efficiently solved using a random walk Metropolis Markov Chain Monte Carlo approach which randomly updates blocks of bathymetry and accepts or rejects updates. Our ensembles of bathymetry models differ from previous estimates of bathymetry by hundreds of meters in some areas and show that the uncertainty in the Bouguer disturbance is the largest source of uncertainty. The different bathymetry models in the ensembles can be used in oceanographic models to place better bounds on sub-ice-shelf melting and future grounding line retreat.
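The abstract describes a random-walk Metropolis MCMC that updates blocks of bathymetry cells and accepts or rejects each update. A minimal 1-D sketch of that idea, with an entirely hypothetical linear "gravity" kernel standing in for the real forward model, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all values hypothetical): a linear kernel A maps a 1-D
# "bathymetry" vector m to predicted gravity d_pred = A @ m.
n = 50
A = np.exp(-0.1 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
m_true = np.sin(np.linspace(0, 3 * np.pi, n))
sigma = 0.05
d_obs = A @ m_true + rng.normal(0, sigma, n)

def log_likelihood(m):
    r = d_obs - A @ m
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis with block updates: perturb a random
# contiguous block of cells, accept/reject with the Metropolis rule.
m = np.zeros(n)
ll = log_likelihood(m)
block, step = 5, 0.1
for it in range(20000):
    i = rng.integers(0, n - block)
    prop = m.copy()
    prop[i:i + block] += rng.normal(0, step, block)
    ll_prop = log_likelihood(prop)
    if np.log(rng.random()) < ll_prop - ll:
        m, ll = prop, ll_prop

print("final misfit (RMS):", np.sqrt(np.mean((A @ m - d_obs) ** 2)))
```

The manuscript's inversion is 2-D, uses a nonlinear terrain-effect forward model, and imposes bounds; this sketch only illustrates the block-update acceptance mechanics.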
Status: final response (author comments only)
- RC1: 'Comment on egusphere-2025-2680', Stephen Cornford, 27 Oct 2025
- RC2: 'Comment on egusphere-2025-2680', Lu Li, 19 Nov 2025
This paper presents a new tool for inverting sub–ice-shelf bathymetry by using sequential Gaussian simulation to model the non-terrain gravity effect, thereby generating realizations of the terrain gravity effect. The gravity inversion is then solved using a Markov chain Monte Carlo approach. This provides an open-source tool for generating bathymetry ensembles with quantifiable uncertainties, which will be extremely useful for constraining basal melt rates. It is also very encouraging to see many open-source packages being used here to address new problems. Overall, this work represents an important contribution to the field, and I look forward to seeing this nice paper published following minor revisions.
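For readers unfamiliar with sequential Gaussian simulation, a toy 1-D sketch of the idea (not the manuscript's 2-D, data-conditioned implementation; covariance model and parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D sequential Gaussian simulation (SGS): visit cells in a random
# order and draw each value from a simple-kriging distribution
# conditioned on everything simulated so far.
def cov(h, corr_len=10.0):
    return np.exp(-np.abs(h) / corr_len)  # exponential covariance, unit sill

n = 40
x = np.arange(n, dtype=float)
path = rng.permutation(n)
z = np.full(n, np.nan)

for idx in path:
    known = np.where(~np.isnan(z))[0]
    if known.size == 0:
        z[idx] = rng.normal()             # first cell: unconditional draw
        continue
    C = cov(np.subtract.outer(x[known], x[known]))
    c = cov(x[known] - x[idx])
    w = np.linalg.solve(C + 1e-10 * np.eye(known.size), c)
    mu = w @ z[known]                     # simple-kriging mean (zero trend)
    var = max(1.0 - w @ c, 1e-12)         # simple-kriging variance
    z[idx] = rng.normal(mu, np.sqrt(var))

print("simulated field mean/std:", z.mean(), z.std())
```

Running this with different seeds yields independent realizations sharing the same spatial statistics, which is what makes SGS attractive for generating Bouguer-disturbance ensembles.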
Main point:
1. The introduction seems a bit long and contains many details about previous work. I suggest condensing the text to highlight the key message more clearly.
The first two paragraphs nicely outline the importance and challenges of modelling bathymetry. The challenges addressed here can be summarized in three parts:
- the unknown terrain gravity effect;
- the unknown density contrast required to solve the gravity inversion (non-uniqueness reason No. 1);
- noise in the observations, which means that even with a fixed density contrast and a known target gravity signal, the inversion remains non-unique (non-uniqueness reason No. 2).
You could then use examples to illustrate how previous studies have dealt with these challenges and conclude by introducing the new workflow.
Alternatively, you could just try to condense the information from Line 45 to Line 118.
2. The text shows that the uncertainties in the Bouguer disturbance are the largest source of error. However, keep in mind that the estimated Bouguer disturbance is also linked to the background density, which itself varies spatially. Rock densities span a wide range, from sedimentary rocks (~2.4 to 2.6 g/cm³) to mafic and ultramafic rocks (~2.8 to 3.0 g/cm³). Including some discussion of these factors would be highly beneficial.
General comments:
Line 17: Remove Introduction and Background.
Line 24–27: This part is a bit hard to follow. Consider changing “play in grounding-line retreat” to “control grounding-line dynamics”?
Line 37: It not only has a large density contrast, but it is also close to the sensor.
Line 51: The residual field (Bouguer) reflects all non-topographic contributions, including crustal density, crustal thickness, and mantle. Similar comments apply to lines 57 and 174.
Line 165: Is this also the half-wavelength resolution for the ITGC data?
Line 171: Since everything is referenced to the ellipsoid, please double-check whether the water column is referenced correctly (sea level is not zero when referenced to the ellipsoid).
Citation: https://doi.org/10.5194/egusphere-2025-2680-RC2
Data sets
Stochastic Sub-ice-shelf Bathymetry for Thwaites, Crosson, and Dotson Ice Shelves Michael Field, Emma MacKie, Lijing Wang, Atsuhiro Muto, Niya Shao https://zenodo.org/records/15258019
Model code and software
GravMCMC Version 1.0 Michael Field https://github.com/mjfield2/stochastic_bathymetry
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 529 | 121 | 18 | 668 | 13 | 10 |
RC1: 'Comment on egusphere-2025-2680', Stephen Cornford, 27 Oct 2025
This paper describes a new bathymetry dataset covering some of the ASE ice shelves (notably not Pine Island Glacier, but the remainder are covered). The dataset is notable for including multiple realizations that map uncertainty in the estimation and could be used in (for example) an ensemble of ocean circulation models seeking to explore uncertainty in melt rates. The paper also introduces a method (with code) for producing that dataset, which is novel in several aspects. I recommend publication based on novelty, rigor, and valuable results, but at the same time I think the manuscript could be improved to make it accessible to a wider audience (e.g. the ocean and ice modelers that would use the data product and will have only limited experience with gravity measurements/inversions).
General Comments
The manuscript uses long text to describe the various parts of the gravity model when equations and diagrams would be easier to understand. This begins on line 45, with discussion of the Bouguer disturbance, terrain effect and so on. The gravity model is mentioned on p13 / equation 4 – but what is it, in terms of the above? A reader familiar with gravity inversions can no doubt write it down, but others will have to construct the equation from the text or look it up. I would have appreciated a simple diagram showing the physical origin of each term in the gravity model
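For illustration only, the kind of decomposition the reviewer asks for could be written as follows (these symbols are not the manuscript's notation, just a plausible schematic):

```latex
% Observed gravity = terrain effect of bathymetry b + non-terrain
% (Bouguer) contribution + measurement noise:
g_{\mathrm{obs}} = g_{\mathrm{terrain}}(b) + g_{\mathrm{Bouguer}} + \varepsilon
% so the inversion seeks b such that
g_{\mathrm{terrain}}(b) \approx g_{\mathrm{obs}} - g_{\mathrm{Bouguer}}
```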
I think the title and the conclusion (though generally not the main text) are a little misleading in a way that understates the novelty/interest of the paper. The MCMC method seems to be used only as an optimization routine, and not to sample the uncertainty. The multiple realizations come from either the data (e.g. the differing realizations of the Bouguer disturbance) or from different initial states for the iterative optimization (with the first being the more important). In other words, it seems as though a different optimizer could at least in principle replace MCMC and the ultimate results would be similar. This is reinforced by the difference between ensemble 3 and the other two: as the text makes clear, it is the statistical realization of the Bouguer disturbance that is driving the uncertainty. In that sense, the compound model of ensembles of solutions optimized to different data is reminiscent of random forest models (where the optimization is of decision trees by some method, but the differing division of data supplied drives variation between the trees).
Specific comments
L10 ‘inversion is efficiently solved’ -> ‘inversion is efficiently computed?’ On the grounds that ‘inversion’ is (slang for) a solution to an optimization problem, rather than the problem
L17 ‘Introduction and Background’ -> delete
L19 ‘amount of melt’ -> melt rate
L25 ‘grounding line retreat’ -> ice dynamics? GL retreat is just one aspect of the ice flow response to ice shelf ablation.
Discussion of the Bouguer disturbance (L45 on and L66 on) – see general comments, but also, the difference between anomaly/disturbance etc. Either 'disturbance' is an uncontroversial term – then use it without commentary – or not – in which case use the usual terms.
Literature review starting L63, and running to about L130. I found some parts difficult to follow, with very long paragraphs, especially the first. The last two paragraphs were easier for me – each one makes a single point and backs it up with brief, reasonably detailed examples.
L130 ‘One approach to generating multiple solutions is MCMC.’ Yes, but that is not the source of multiples here (as it is in other places and notably Mosegaard and Tarantola). Correct? I don’t think much change is needed here, just a different opening (MCMC methods have long been used in gravity inversion….)
L185 – discussion of upward continuation – introduced mid-paragraph and not defined.
L208 – 100 inversions run in parallel on a machine that has < 20 cores. State the wall time for a single MCMC run.
Table 1 could be expanded to summarize the ensembles more completely, e.g. ensemble 2 comprises 100 realizations, with realization n generated by selecting one item from an MCMC chain subject to the nth SGS realization, etc.
L240 (and on). The choice to use MCMC as an optimization procedure, rather than for sampling a distribution, is unusual (not least because the rate of convergence is low). Some comment on the reasoning would help. I imagine that the degree of autocorrelation in each chain is high, especially given the block updates, so that long chains would be needed. Ruggeri et al., Geophys. J. Int. (2015) 202, 961–975 have chains with ~10^6 iterations.
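The reviewer's point about autocorrelation can be made concrete with a toy effective-sample-size (ESS) calculation; here an AR(1) series stands in for one cell's MCMC trace, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# An AR(1) trace (stand-in for one bathymetry cell's MCMC trace) with
# high autocorrelation has a much smaller effective sample size (ESS)
# than its raw length, which is why block-update chains need to be long.
phi, n = 0.99, 100000
e = rng.normal(size=n)
trace = np.empty(n)
trace[0] = e[0]
for t in range(1, n):
    trace[t] = phi * trace[t - 1] + e[t]

def ess(x, max_lag=2000):
    # ESS = N / (1 + 2 * sum of autocorrelations), truncated once the
    # sample autocorrelation becomes negligible.
    x = x - x.mean()
    denom = np.dot(x, x)
    acf_sum = 0.0
    for k in range(1, max_lag):
        rho = np.dot(x[:-k], x[k:]) / denom
        if rho < 0.05:
            break
        acf_sum += rho
    return len(x) / (1 + 2 * acf_sum)

print(f"chain length {n}, effective sample size ~ {ess(trace):.0f}")
```

With phi = 0.99 the integrated autocorrelation time is roughly (1 + phi)/(1 - phi) ≈ 200, so only a small fraction of the 10^5 iterations are effectively independent.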
Eqn 1. The notation \sigma(m) = \kappa \rho(m) L(m) is from Mosegaard and Tarantola, and I see others use it too. But more often Bayes' rule is stated with more explicit reference to conditional probabilities, e.g. p(m|d) = p(d|m) p(m) / p(d), where p(d|m) = L(m), which makes clear that the likelihood is the probability of the observed data given the model, and so on. I would also say that the prior cannot be ignored; rather, you have chosen a uniform distribution p(m) between some bounds, which means that p(m|d) = \kappa p(d|m) within said bounds.
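Spelled out, the form the reviewer suggests (with a uniform bounded prior; notation illustrative) is:

```latex
p(m \mid d) = \frac{p(d \mid m)\, p(m)}{p(d)}, \qquad p(d \mid m) = L(m)
% With a uniform prior p(m) on [m_{\min}, m_{\max}]:
p(m \mid d) =
\begin{cases}
  \kappa\, L(m), & m \in [m_{\min}, m_{\max}] \\
  0, & \text{otherwise}
\end{cases}
```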
L275 – seems like a strange way of saying ‘we have a formula for sigma(m) in terms of L(m), derived just a few lines back’
Table 2: define width and range.
Figure 7 / results in general: how do you know that 100 ensemble members is sufficient? You could plot some summary quantities F(n), e.g. the sum-of-squares over the domain of the standard deviation, where F(n) means the F computed from members 1 to n.
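A minimal sketch of the reviewer's suggested convergence check, using synthetic stand-in fields (the member count, cell count, and amplitude are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Each row is one ensemble member (a flattened bathymetry field,
# synthetic here).  Track F(n) = sum over cells of the squared standard
# deviation computed from members 1..n; F(n) flattening out suggests
# the ensemble size is sufficient.
n_members, n_cells = 100, 500
members = rng.normal(0.0, 50.0, size=(n_members, n_cells))  # fake fields

F = np.array([
    np.sum(members[:k].std(axis=0) ** 2)
    for k in range(2, n_members + 1)
])
rel_change = np.abs(np.diff(F)) / F[:-1]
print("relative change in F over last 10 steps:", rel_change[-10:].max())
```

Applied to the real ensembles, `members` would hold the flattened bathymetry realizations, and one would look for the relative change in F(n) dropping below some tolerance well before n = 100.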
The Discussion section is quite long; section 5.3 seems not to propose anything concrete. I do agree that the general framework could be applied elsewhere, but the key innovations, e.g. the SGS of the Bouguer correction, are domain specific.