This preprint is distributed under the Creative Commons Attribution 4.0 License.
Revisiting the global mean ocean mass budget over 2005–2020
Abstract. We investigate the continuity and stability of the GRACE and GRACE Follow-On satellite gravimetry missions by assessing the ocean mass budget at global scale over 2005–2020, focusing on the last years of the record (2015–2020) when GRACE and GRACE Follow-On faced instrumental problems. For that purpose, we compare the global mean ocean mass estimates from GRACE and GRACE Follow-On to the sum of the contributions from Greenland, Antarctica, land glaciers and terrestrial water storage, estimated with independent observations. A significant residual trend of -1.60 ± 0.36 mm/yr over 2015–2018 is observed. We also compare the gravimetry-based global mean ocean mass with the altimetry-based global mean sea level corrected for the thermosteric contribution. We estimate and correct for the drift of the wet tropospheric correction of the Jason-3 altimetry mission computed from the on-board radiometer. It accounts for about 40 % of the budget residual trend beyond 2015. After correction, the remaining residual trend amounts to -0.90 ± 0.78 mm/yr over 2015–2018 and -0.96 ± 0.48 mm/yr over 2015–2020. GRACE and GRACE Follow-On data might be responsible for part of the observed non-closure of the ocean mass budgets since 2015. However, we show that significant interannual variability is not well accounted for by the data used for the other components of the budget, including the thermosteric sea level and the terrestrial water storage. In addition, missing contributions from the evolution of the deep ocean or the atmospheric water vapour may also contribute.
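To make the budget arithmetic summarised above concrete, the following minimal sketch (in Python, with synthetic placeholder series rather than the actual GRACE/GRACE-FO and component data) illustrates how a budget residual and its least-squares trend with a formal standard error can be computed; all variable names and values are hypothetical.

```python
import numpy as np

# Hypothetical monthly time axis (decimal years) over 2015-2018; real series would come
# from GRACE/GRACE-FO and the individual budget components, all in mm of sea-level equivalent.
t = 2015 + np.arange(48) / 12.0
rng = np.random.default_rng(0)

gmom_grace = 2.5 * (t - t[0]) + rng.normal(0, 1.0, t.size)  # gravimetry-based ocean mass (placeholder)
gis        = 0.8 * (t - t[0]) + rng.normal(0, 0.3, t.size)  # Greenland ice sheet (placeholder)
ais        = 0.5 * (t - t[0]) + rng.normal(0, 0.3, t.size)  # Antarctic ice sheet (placeholder)
glaciers   = 0.6 * (t - t[0]) + rng.normal(0, 0.2, t.size)  # land glaciers (placeholder)
tws        = 0.2 * (t - t[0]) + rng.normal(0, 0.5, t.size)  # terrestrial water storage (placeholder)

# Budget residual: gravimetry-based ocean mass minus the sum of its contributions.
residual = gmom_grace - (gis + ais + glaciers + tws)

# Ordinary least-squares trend of the residual (mm/yr) with a formal standard error
# (serial correlation is ignored here, unlike in a careful uncertainty budget).
A = np.vstack([t - t.mean(), np.ones_like(t)]).T
coef, ss_res, *_ = np.linalg.lstsq(A, residual, rcond=None)
sigma2 = ss_res[0] / (t.size - 2)
trend_se = np.sqrt(sigma2 * np.linalg.inv(A.T @ A)[0, 0])
print(f"residual trend: {coef[0]:+.2f} ± {trend_se:.2f} mm/yr")
```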
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2022-716', M. D. Palmer, 05 Oct 2022
This paper deals with closure of the global ocean mass budget for the period 2005-2020 using a combination of observation- and model-based data products. This type of study is crucial to our understanding of observed climate change and identifying potential issues / limitations in observing capability and/or data processing. The manuscript is well-written and the figures are of high quality. Dealing with numerous observation and model-based datasets is always challenging in terms of understanding all the potential issues. My main comments below encourage the authors to offer a bit more discussion of the non-budget-closure and to include more quantitative information about how this could be accommodated by the various hypotheses they put forward. They could also consider the relative sizes of estimated uncertainties and/or instances where the uncertainty estimation may be limited by ensemble characteristics, or for other reasons. I find the manuscript to be suitable for publication subject to addressing my comments below.
I'm unsure about the term Global Mean Ocean Mass used throughout the manuscript. I tend to think of this as having units of mass per unit area, but I think we are talking about changes in the total ocean mass? Perhaps there is some explanation or convention that could be mentioned and this point clarified at the start of the manuscript.
I would like to see the authors spend a bit more discussion on the non-closure of the budgets shown in Figures 6, 7 and 8. Fundamentally, when a budget does not close, it suggests that the uncertainties in one or more components have been underestimated. This point is worthy of some discussion and perhaps some speculation on where limited ensembles or diversity across the ensemble may be playing a role in the uncertainty estimation. I led a recent paper where we presented a generic framework for using ensembles to characterise uncertainty, which may be of interest to the authors, Palmer et al [2021]: https://iopscience.iop.org/article/10.1088/1748-9326/abdaec
In the closing sentences of section 4.3, the authors cite “deep ocean below 2000 m depth, the atmospheric water vapour variations and the permafrost thawing” as potential explanations for non-budget closure. I wonder if the authors could offer some more quantitative information in this regard. How large would the temperature variations below 2000 m depth have to be to account for the residuals, and similarly for atmospheric water vapour and permafrost thawing? Would it be helpful if the Argo-based estimates of thermosteric sea-level change could include some estimate of the additional uncertainty below 2000 m? However, I suspect that the horizontal sampling uncertainty may still dominate - and perhaps this is underestimated, as mentioned in my comment on Figure 7. The paper by Allison et al [2019] https://iopscience.iop.org/article/10.1088/1748-9326/ab2b0b neatly illustrates the potential for mesoscale ocean “noise” to introduce spurious signals on a range of timescales, which may be inherent to the observational sampling and common to several (all?) data products? The authors may wish to comment in this regard.
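As a purely illustrative back-of-envelope for the “how large” question above, the sketch below converts a residual trend into the deep-ocean warming rate that would be needed to explain it thermosterically; the expansion coefficient, layer thickness and residual value are assumed round numbers, not results from the paper.

```python
# Back-of-envelope: thermosteric sea-level rate ~ alpha * (dT/dt) * H for a layer of thickness H.
alpha = 1.0e-4          # 1/K, assumed expansion coefficient for cold deep water (order of magnitude)
H = 2000.0              # m, assumed effective thickness of the ocean below 2000 m depth
residual_rate = 0.9e-3  # m/yr, roughly the post-correction residual trend quoted in the abstract

warming_rate = residual_rate / (alpha * H)  # K/yr needed to absorb the residual thermosterically
print(f"required deep-ocean warming: {warming_rate * 1e3:.1f} mK/yr")  # a few mK/yr
```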
Figure 6 seems to show a strong correlation between the WGHM TWS time series and the GRACE ensemble mean. Could these signals have been under- or over-estimated in one of the products? I think this point is worth some discussion. Can the authors comment on the different temporal resolution of the underlying datasets and ability to resolve the signals? This may also contribute to non-closure. This is mentioned briefly in section 4.3, but discussion of specific timeseries characteristics in section 4.1 and 4.2 could aid the reader.
Figure 7(a): The uncertainty on the in-situ thermosteric ensemble mean is much smaller than I would have expected. Was this informed simply from ensemble spread, as shown in Figure 5? Palmer et al [2021] argues that “structural uncertainty” from ensemble spread needs to be combined with some estimate of “internal/parametric uncertainty” in order to fully characterise the total uncertainty. I would encourage the authors to give this standpoint some consideration and update the uncertainty estimate if appropriate.
Figure 8: I don’t see panels (c) and (d) in this figure, as implied by the figure caption. I think perhaps the caption descriptions for (c) and (d) are intended to apply to panels (a) and (b)? Similar to Figure 6, there are some apparent correlations between the residuals and the TWS timeseries in particular. One thing to be cautious of is the fact that delayed-mode quality control of Argo floats typically takes 1-2 years to complete. Therefore, the last 1-2 years of data can be considered “provisional” and may be subject to revision, although I think this is generally considered more of an issue for salinity data, as noted here https://floats.pmel.noaa.gov/float-data-delayed-mode-quality-control
Figure 9: I’m not sure I understand the plot titles. The summation symbol would tend to suggest to me that the quantity subtracted is always (GIS + AIS + GIC + TWS), but this is not the case for panel (c)? I think the similarity in the timeseries shown in Figure 9(b) strongly implies that GRACE and (GIS + AIS + GIC + TWS) must have similar timeseries, as shown in Figure 9(c), so I’m not sure how much additional information this really offers the reader. In addition to the trends, are there physical insights we can draw on from the variations/similarities in the residuals? In the figure caption, please clarify the precise period that trends are calculated over - e.g. 1st Jan 2015 to 31 Dec 2018 or similar.
Line 5: On the residual trend, it’s helpful to be explicit on whether the GRACE-determined mass trend is larger or smaller than the sum of individual components.
Line 12: Stylistic choice, but I would recommend replacing “Besides” with “In addition”. Same sentence, suggest replacing “water vapour” with “water content”.
Line 15: Please cite the latest IPCC AR6 report and specify a period for which the two-thirds statement applies (this has changed over time), as noted in the Working Group I summary for policymakers and Chapter 9 (Fox-Kemper et al, 2021).
Line 30: Typo? Replace “float” with “floats”.
Line 56: Replace “by the Argo float” with “by Argo floats”.
Line 90: Could you briefly comment on the choice of GIA dataset and what effect a different dataset might have on your analysis? Some idea of the importance of this for the reader would be helpful.
Lines 96-97: Can you comment on the physical plausibility of some of the very sharp drops in the datasets seen in 2017? E.g. what would this imply for rainfall over land and subsequent river flows? How do these timeseries compare with timeseries of terrestrial land water storage shown later in the manuscript? I suspect that this cursory analysis would support “noise” as the main candidate explanation.
Equations (1) and (2): I would suggest a different notation for Epsilon between these equations, perhaps Epsilon1 and Epsilon2. This would make clear that these quantities are fundamentally different (a dash is often used when one quantity is a proxy for the other) - they are related to completely independent datasets (?)
Equation (3): I don’t understand why the standard uncertainties are raised to the power 3 before summing them. Is there a reference you can cite that explains this approach? Or offer some additional explanation.
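For context, the conventional combination of independent standard uncertainties is a root sum of squares; the short sketch below contrasts that with a cubic summation so the questioned exponent is concrete (the input values are arbitrary, not figures from the paper).

```python
import numpy as np

# Arbitrary example standard uncertainties (mm/yr) of independent budget components.
sigmas = np.array([0.3, 0.4, 0.2])

# Conventional propagation for independent errors: root sum of squares (quadrature).
combined_rss = np.sqrt(np.sum(sigmas**2))

# The power-3 variant questioned above, shown only for comparison.
combined_cubic = np.cbrt(np.sum(sigmas**3))

print(f"quadrature: {combined_rss:.3f} mm/yr, cubic sum: {combined_cubic:.3f} mm/yr")
```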
Figure 1: Please include an explanation of the units of mass in global mean sea level equivalent. A second y-axis in units of Gt or similar could usefully be included.
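The conversion behind such a second axis is straightforward; a minimal sketch follows, using a commonly quoted global ocean surface area as an assumption rather than the value adopted in the paper.

```python
# Conversion between mm of global mean sea-level equivalent (SLE) and gigatonnes of water.
OCEAN_AREA = 3.61e14  # m^2, commonly quoted global ocean surface area (assumed value)
RHO_WATER = 1000.0    # kg/m^3, fresh-water density used for the mass/sea-level equivalence

def mm_sle_to_gt(mm: float) -> float:
    """Convert millimetres of sea-level equivalent to gigatonnes (1 Gt = 1e12 kg)."""
    return mm * 1e-3 * OCEAN_AREA * RHO_WATER / 1e12

print(mm_sle_to_gt(1.0))  # ~361 Gt per mm SLE
```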
Figure 2: Same comment as for Figure 1 applies here. Please consider this point for all subsequent figures (may be more appropriate to some figures than others).
Citation: https://doi.org/10.5194/egusphere-2022-716-RC1 - AC1: 'Reply on RC1', Anne Barnoud, 07 Jan 2023
RC2: 'Comment on egusphere-2022-716', Anonymous Referee #2, 17 Oct 2022
I have read the paper by Barnoud et al. dealing with closure of the global ocean mass budget for the period 2005-2020. I agree with the other reviewer that such a study is crucial to our understanding of observed climate change and to identifying potential problems and limitations in observing capability and/or data processing. In general, the manuscript is well-written, but there are a number of issues and clarifications that need to be dealt with. I also agree with the other reviewer in encouraging more discussion of the non-budget-closure with respect to the various hypotheses presented in the paper.
One initial concern deals with the datasets. They have different coverage, and this needs to be accounted for in the comparison. One example is the altimetric ocean dataset. Apparently, this dataset is limited to 60N, whereas previous investigations have been limited to 66N. What is the reason for this limitation? It cannot be the 200 km distance to shoreline, but some other argument?
If the investigation is limited to 66S-60N, then all major contributors to mass changes are outside the ocean mask. Hence the authors NEED to revise the manuscript and compute the sea level fingerprints, as all of the contributing datasets (GIS and AIS in particular) are completely outside this limitation. In my view, this is a requirement to perform before the manuscript is published, as it might have a significant impact on the results.
Figure 6. I agree with the other reviewer that there is something wrong with the residuals. I also noticed that the computations/comparisons in this figure, unfortunately, end sometime in 2018, which really calls for an update to the time series before publication.
Equation (3): I agree with the other reviewer that there is something wrong with this equation.
Section 3.3. The paper claims that glaciers in Greenland are left out because they are already part of the Greenland estimates. This is, to my knowledge, incorrect. At least they are not part of the estimates in Simonsen (2021) and Mankoff (2021). This needs further clarification.
The authors devote large parts to the discussion of a possible trend in the Jason-3 MWR/WTC being responsible for up to 40 % of the differences. This is a critical point in the paper, as it refers to unpublished material (Barnoud, 2022) and a lot of the following discussion is related to the Jason-3 issue.
A closer look at their own Figure 6 makes me seriously doubt the explanation that Jason-3 is really the problem. In particular, the lower part of Figure 7 indicates that the difference between altimetry and the other mass contributions clearly diverged from late 2014/early 2015. This is more than a year before the launch of Jason-3 in 2016, delivering reliable data from March/April 2016. Wouldn’t this mean that a more intuitive explanation would be that the older Jason-2 started drifting during its old age, and that the problem is that the tandem-mission correction of the MWR between Jason-2 and Jason-3 was in error?
In my view the authors' explanation of a drift in the Jason-3 radiometer is very vague, particularly as the authors discuss the significant trend in the 2015-2018 period. During this period Jason-3 was only present 65 % of the time (2016-04 to 2018-12). If Jason-3 is responsible for 40 % of the trend in this period, the apparent trend in Jason-3 during its presence (in 65 % of the time series) must consequently have been much larger. This should also be addressed in more detail.
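The scaling argument can be made explicit with a small illustrative computation; the numbers below simply combine the reviewer's 40 % and 65 % figures with the pre-correction residual trend quoted in the abstract, and the linear rescaling is only a first-order approximation.

```python
# First-order illustration of the reviewer's scaling argument (not values from the paper).
residual_trend = -1.60   # mm/yr, pre-correction residual trend over 2015-2018 (abstract)
jason3_share = 0.40      # fraction of the residual attributed to the Jason-3 WTC drift
jason3_presence = 0.65   # fraction of the 2015-2018 window covered by Jason-3 (approx. 2016-04 to 2018-12)

drift_full_window = jason3_share * residual_trend           # drift expressed over the full window
drift_during_jason3 = drift_full_window / jason3_presence   # rescaled to the Jason-3 sub-period only

print(f"implied drift over the full 2015-2018 window: {drift_full_window:+.2f} mm/yr")
print(f"implied drift while Jason-3 was flying:       {drift_during_jason3:+.2f} mm/yr")
# The exact relation depends on how a drift confined to a sub-period projects onto a
# single linear trend, but the rescaled value is necessarily larger in magnitude.
```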
Similarly to Reviewer 1, I have an issue with the physical plausibility of the very sharp drops in the datasets seen in 2017. Please explain this. Could this be related to the gap between GRACE and GRACE-FO during this period?
Regarding the discussion points in lines 238-242 about potential evolution below 2000 m depth, permafrost thawing, and atmospheric water vapour: in line 190 the authors already investigated and corrected for the deep ocean contribution, which ranges up to 0.1 mm/yr. Again, this magnitude is very small compared to the difference seen, so I do not follow this argumentation.
All in all, I find the issue of revisiting the global ocean mass budget extremely important, but the paper and findings are presently not adequately convincing for publication.
Without computing the full fingerprints of the contributions to deal with the limited ocean mask, I do not think that the paper presents substantial, clear and new information, particularly as many results are only presented up to 2018.
I suggest the authors revisit the data, perform the correct computations, and extend the time series as much as possible so that the paper and the conclusions really represent the 2005-2020 period.
Citation: https://doi.org/10.5194/egusphere-2022-716-RC2 - AC2: 'Reply on RC2', Anne Barnoud, 07 Jan 2023
Viewed
- HTML: 309
- PDF: 208
- XML: 16
- Total: 533
- Supplement: 50
- BibTeX: 5
- EndNote: 3
Julia Pfeffer
Anny Cazenave
Michaël Ablain