Technical note: Large offsets between different datasets of sea water isotopic composition: an illustration of the need to reinforce intercalibration efforts
Abstract. We illustrate offsets in seawater isotopic composition between the data sets presented in two recent studies and the LOCEAN seawater isotopic composition dataset, as well as other data for the same years and same regions. These comparisons are carried out in surface waters, in one case in the North and South Atlantic, and in the other in the subtropical South-East Indian Ocean. The observed offsets between data sets, which exceed 0.10 ‰ in δ18O and 0.50 ‰ in δ2H, might in part reflect seasonal or spatial variability. However, they are rather systematic, so they likely originate, at least partially, from different instrumentations and protocols used to measure the water samples. They need to be adjusted in order to ultimately merge the different data sets. This highlights the need to actively share seawater isotopic composition samples dedicated to specific intercomparison of the data produced in the different laboratories.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2024-3009', Anonymous Referee #1, 25 Oct 2024
General comments: This short note presents some seawater oxygen and hydrogen isotope data generated in the lead author’s laboratory, and conducts a comparison with other datasets from the same ocean regions generated in other labs. The major finding appears to be that there are troubling differences in the data, possibly instrumental or procedural in origin. Whilst concerning, this is not an especially surprising finding – I think that most scientists who work with such data are quite attuned to the possibility of inter-laboratory offsets, and in many cases will conduct their own ad-hoc comparisons to try and make datasets consistent where possible.
Different instrumentation and protocols could be partly to blame, in some cases, and the paper outlines the possible origins of some of these differences. I suspect that long-term maintenance of standards is also an issue – while all samples are supposed to be measured relative to known standards (e.g. VSMOW), the cost and availability of VSMOW is such that samples are virtually never measured against it directly, but instead against intermediate (secondary) laboratory standards, which are themselves measured relative to e.g. VSMOW. Any drift or inaccuracy in the known composition of these secondary standards will thus feed through into the data.
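To make the calibration-chain concern concrete, here is a minimal sketch (in Python, with purely illustrative standard names and values, none taken from the paper) of a standard two-point normalization against secondary laboratory standards, and of how an unnoticed drift in the accepted value of one of those standards propagates into the reported sample values:

```python
import numpy as np

def normalize_to_vsmow(raw, raw_std_low, raw_std_high, true_std_low, true_std_high):
    """Two-point normalization: map raw instrument deltas onto the
    VSMOW-SLAP scale using two laboratory standards of assumed known value."""
    slope = (true_std_high - true_std_low) / (raw_std_high - raw_std_low)
    return true_std_low + slope * (raw - raw_std_low)

# Illustrative numbers only: one seawater sample and two secondary standards
raw_sample = -0.42               # raw delta18O of the sample (per mil)
raw_low, raw_high = -10.1, 0.3   # raw deltas of the two lab standards
true_low, true_high = -10.0, 0.5 # accepted values of those standards vs VSMOW

print(normalize_to_vsmow(raw_sample, raw_low, raw_high, true_low, true_high))

# If the accepted value of one secondary standard has drifted, e.g. by +0.05 per mil,
# every reported value shifts systematically, by an amount that depends on where the
# sample sits between the two standards:
print(normalize_to_vsmow(raw_sample, raw_low, raw_high, true_low + 0.05, true_high))
```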
The current paper does a reasonable job of highlighting these sorts of issues in the context of the datasets examined, but overall the treatment is relatively superficial. The analysis essentially compares a couple of datasets and considers whether differences might be “real” (i.e. deriving from oceanic temporal or spatial variability), before concluding that they are most likely not. One could argue that an even more useful analysis would assess all available public isotope datasets and conduct some sort of crossover analysis that would tabulate offsets. (I believe such an activity is being conducted at AWI Bremerhaven, and I note the acknowledgement to one of the key researchers there – perhaps there is scope to ramp up that dialogue and deepen the analysis presented here, especially given AWI representation amongst the authorship team for this paper?). This would enable the full extent of the problem to be at least quantified, and possibly its root causes to be better identified, if the derived offsets were categorised by method, protocol, instrumentation etc. I realise this is a much bigger job than the authors intended to undertake, but I feel it would be significantly more useful.
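As a rough illustration of what such a crossover analysis might look like, the sketch below pairs samples from two public datasets that lie close in space and time and tabulates the mean δ18O offset. The column names, matching tolerances, and pandas-based structure are assumptions made for illustration; an operational scheme (e.g. one organised along GO-SHIP lines) would be considerably more elaborate:

```python
import pandas as pd

def crossover_offsets(ds_a, ds_b, max_dist_deg=0.5, max_days=30):
    """Pair samples from two datasets that lie close in space and time
    and tabulate the delta18O offset (dataset A minus dataset B)."""
    pairs = []
    for _, a in ds_a.iterrows():
        near = ds_b[
            (abs(ds_b["lat"] - a["lat"]) < max_dist_deg)
            & (abs(ds_b["lon"] - a["lon"]) < max_dist_deg)
            & (abs((ds_b["time"] - a["time"]).dt.days) < max_days)
        ]
        for _, b in near.iterrows():
            pairs.append(a["d18O"] - b["d18O"])
    diffs = pd.Series(pairs)
    return {"n_pairs": len(diffs), "mean_offset": diffs.mean(), "std": diffs.std()}
```

Categorising the resulting offsets by laboratory, instrumentation and protocol would then indicate whether they cluster by method rather than by region, which is the diagnostic the crossover approach is after.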
Concerning ways forward, I think a minimum requirement of the paper should be a clear statement on how the issue should be addressed in future. The paper alludes to some possible methods that might be used to alleviate/address the problem (e.g. exchange of samples between labs), but what is needed are firm recommendations and suggestions for who can follow them up and (critically) who should oversee the process. For example, it is worth noting that GO-SHIP has established protocols to deal with exactly this issue for other variables to ensure their intercomparability, and might be well-placed to transfer/apply those protocols to seawater isotope data also. Alternatively, possibly IAPSO has a role?
What is definitely not wanted is each lab or user conducting their own intercomparison/correction exercises, since the resulting datasets (while internally more consistent than before) will still not be comparable across them all, if different methods to intercompare/correct are used.
Overall, the paper highlights an issue that is concerning but not surprising. I have no objection to the paper being published – I believe what it says is true, and the topic is an important one – but a fuller treatment of the issue would be even more beneficial to the community, including very specific recommendations on how it should be addressed.
Minor points.
Title: perhaps should mention “… and suggested ways forward”, or suchlike? Highlighting the issue is important, but even more useful would be outlining what needs to be done to resolve it.
Abstract: Is written from the context of comparing LOCEAN datasets to others, which is sensible (I’m sure it is what the authors’ starting point was) – but perhaps just saying “intercomparing available public datasets” would be more balanced?
Line 22. “carried out”?
Line 24. Punctuation is important here: “… between data sets, which exceed 0.1 in d18O and 0.5 in d2H, …” – the commas matter!
Page 1 and 2. Just a stylistic thing, but these paragraphs are really long… it would help the reader to break them into smaller chunks.
Line 123. When examining offsets, it’s a bit unsatisfying that the isotope data from Polarstern were not collected from the same waterline as the TSG. Some quantification of the impact of this would be useful, especially if the sensors/intake were at different depths and/or different positions on the hull.
Line 202. ACC fronts are usually capitalised – “Polar Front” etc.
Various places. “pss” seems to have crept in as a salinity unit. If the data are indeed measured and presented on the practical salinity scale (as stated), then the salinities are ratios and hence do not have units.
Citation: https://doi.org/10.5194/egusphere-2024-3009-RC1
AC1: 'Reply on RC1', Gilles Reverdin, 03 Dec 2024
RC2: 'Comment on egusphere-2024-3009', Anonymous Referee #2, 05 Nov 2024
Review
Technical note: Large offsets between different datasets of sea water isotopic composition: an illustration of the need to reinforce intercalibration efforts
Gilles Reverdin, Claire Waelbroeck, Antje H. L. Voelker, Hanno Meyer
This technical note highlights the important consideration of systematic offsets between seawater isotopic values measured using different instrumentation and/or in different laboratories. Isotopic measurements have been largely underutilized to-date, and being able to reliably compare data collected and/or analyzed by different parties will be key in developing a cohesive understanding of the ocean isotopic system.
The authors highlight the need for establishing "well-accepted systematic guidelines for data production and quality control". Further, they advocate that "enhancing scientific exchange between the different institutes needs to be actively pursued, in order to reduce the errors when merging different datasets". I strongly agree with these main conclusions/recommendations, and feel that ongoing, widespread cross-calibration between institutes and research groups is the only way to achieve this.
While I am in agreement with the overall intention of the paper, I think it is difficult to make this point, as presented, using surface samples alone. Some deeper digging beyond the offsets being 'rather systematic' would help strengthen the case.
There are a few main points in the text that I feel could be addressed more carefully and/or given some more thought and discussion.
Main point 1
This technical note focuses on surface water samples. Surface waters are much more variable seasonally and geographically than deep water masses due to evaporation/precipitation/freezing/melting. While this is acknowledged within the paper, I’m not totally convinced that the offsets observed between the relatively limited datasets are lab/method based rather than seasonal and/or geographic differences.
Without many similar datasets demonstrating the extent of natural variability, or direct replicate analyses performed at different labs, it’s difficult to make a convincing case that the reported differences are analytical offsets and not observed natural geographic/temporal variation. In absence of direct replicate analysis/cross-calibrations, the exercise detailed in this paper may be better performed with deep water samples, with less variable isotopic compositions.
Main point 2
I have a hard time recommending the application of a 'correction' offset between datasets without a direct cross-calibration between the two labs; without such a cross-calibration, it is impossible to know whether the difference in values is an offset (from technique, reference material, or sample evaporation) or a true difference.
While correcting for a calibration offset between labs could be acceptable with appropriate inter-lab cross-calibration efforts, trying to ‘correct’ data where samples may have been compromised involves considerable risk, and could instill a false sense of confidence in intercomparison efforts.
More recently published material indicates that differences between analytical techniques (i.e. IRMS vs CRDS) are insignificant (i.e. less than analytical precision). Reference material errors can occur, and the only way to identify that for certain is cross-calibration between the facilities in question. Unfortunately, facilities are often reluctant to spend the time/resources on cross calibration, claiming that if all labs are (e.g.) referenced against VSMOW, then there is no need (which is of course true, in theory, but not all labs operate the same way with regards to calibration, replication, etc.). This is a problem that must be solved with buy-in across the community, and a commitment to a longer-term vision of isotopic data (vs short-term focus on a study from a single cruise, where offsets between labs/instruments are often immaterial).
An offset resulting from sample compromise prior to analysis is the most difficult case. Unless the offsets are very large, it is not truly possible to know which samples may have been subject to evaporation during storage, or to what extent. Even if we can be confident that some samples from a cruise were compromised, it cannot be assumed that each sample was subject to the same amount of evaporation/fractionation offset. An attempt to correct some number of collected samples from a dataset could have the unintended consequence of unnecessarily offsetting samples within that set that were not actually subject to compromise/evaporation. My opinion is that the best approach in this scenario is to discard the data that has clearly been compromised (i.e. well outside of established natural variability) and leave the rest alone.
Unless a direct inter-lab cross-calibration has been performed, I would rather not see 'correction' offsets applied to datasets in an attempt to make them more comparable. Without the cross-calibration data, there is simply no way to know whether one is truly making the correction one thinks one is.
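For concreteness, here is a minimal sketch of the kind of direct cross-calibration check described above, assuming a set of replicate samples has actually been measured in both laboratories. The values, the 2-sigma criterion, and the combined-precision threshold are all illustrative assumptions, not prescriptions from the paper:

```python
import numpy as np

def interlab_offset(deltas_lab_a, deltas_lab_b, combined_precision=0.05):
    """Estimate an inter-laboratory offset from paired replicate analyses.
    Treat the offset as meaningful only if it exceeds both twice its own
    standard error and the combined analytical precision of the two labs."""
    diffs = np.asarray(deltas_lab_a) - np.asarray(deltas_lab_b)
    offset = diffs.mean()
    stderr = diffs.std(ddof=1) / np.sqrt(len(diffs))
    significant = abs(offset) > max(2 * stderr, combined_precision)
    return offset, stderr, significant

# Hypothetical paired delta18O replicates (per mil) measured in both labs
lab_a = [0.31, 0.28, 0.35, 0.30, 0.33]
lab_b = [0.22, 0.20, 0.27, 0.21, 0.25]
print(interlab_offset(lab_a, lab_b))
```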
--
A few specific minor points:
L40: Glacial melt from ice shelves can impact isotopic composition well below the surface ocean – down to 800m in the Amundsen Sea, Antarctica. (Biddle et al., 2019; Hennig et al., 2024; Randall-Goodwin et al., 2015)
L66: More recent studies have demonstrated significantly better precision with CRDS systems, on par with or better than most published IRMS data (current manufacturer-stated precision for Picarro systems is ±0.025‰ in δ18O and ±0.1‰ for δD). Voelker et al. (2015) achieved a precision of ±0.06‰ δ18O (in-run precision ±0.1‰ δ18O; ±1‰ δD), while Hennig et al. (2024) achieved a precision of ±0.02‰ δ18O (in-run precision of ±0.04‰ δ18O). I'm not sure whether an advancement in technology or a change in methodology is responsible, but it doesn't seem that modern precision is meaningfully different between IRMS and CRDS.
L78: Is this significant? Other studies (Hennig et al., 2024; Walker et al., 2016) showed equivalence within instrumental precision between IRMS and CRDS techniques. I'm not sure one can analytically justify applying to the data an offset that is smaller than the analytical precision.
Citation: https://doi.org/10.5194/egusphere-2024-3009-RC2
AC1: 'Reply on RC1', Gilles Reverdin, 03 Dec 2024
EC1: 'Comment on egusphere-2024-3009', Karen J. Heywood, 03 Dec 2024
Thank you for your responses to the two reviewers. I look forward to reading your revised paper. I think you still need to convince the reviewers and myself that you have a story to tell. Also please remove the pss from the figures as this is just incorrect. You can include in the caption that it is salinity measured on the practical salinity scale. Or even better use the 2010 equation of state TEOS-10 and quote absolute salinity (g/kg) throughout.
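For reference, a practical-salinity value can be converted to TEOS-10 Absolute Salinity with the GSW-Python (gsw) toolbox, as sketched below with made-up coordinates and pressure. Note that the conversion relies on a climatological salinity-anomaly field rather than measured constituents, which is the basis of the author's objection in the reply that follows:

```python
import gsw  # TEOS-10 Gibbs SeaWater toolbox (GSW-Python)

# Illustrative values only: practical salinity from a thermosalinograph,
# near-surface pressure, and a nominal sample position
SP = 35.2            # practical salinity (dimensionless, PSS-78)
p = 5.0              # sea pressure in dbar
lon, lat = -30.0, 45.0

# Absolute Salinity in g/kg; gsw interpolates a climatological salinity
# anomaly at (lon, lat, p) rather than using measured DIC or nutrients.
SA = gsw.SA_from_SP(SP, p, lon, lat)
print(SA)
```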
Citation: https://doi.org/10.5194/egusphere-2024-3009-EC1
AC2: 'Reply on EC1', Gilles Reverdin, 03 Dec 2024
Thank you for your comment.
I agreed in the response letter to remove the pss from the figures (mentioning it instead in the figure captions), but will stick to practical salinity, as it is clearly stated in the draft.
Here, as the samples are not associated with measured DIC, silicates, nitrates... adopting absolute salinity would be misleading. It is really a pity that so many people who do not have the proper data to estimate it are adopting it and using it to report a (more or less complex) proxy of a conductivity measurement. At least, practical salinity as used here is very clear.
PS: I have a question: how and where do I download the track-changed version of the paper?
Citation: https://doi.org/10.5194/egusphere-2024-3009-AC2