This preprint is distributed under the Creative Commons Attribution 4.0 License.
Best practices for estimating turbulent dissipation from oceanic single-point velocity timeseries observations
Abstract. We provide best practices for estimating the dissipation rate of turbulent kinetic energy, ε, from velocity measurements in an oceanographic context. These recommendations were developed as part of the Scientific Committee on Oceanic Research (SCOR) Working Group #160 "Analyzing ocean turbulence observations to quantify mixing". The recommendations here focus on velocity measurements that enable fitting the inertial subrange of wavenumber velocity spectra, and we examine the range of dissipation rates measurable with this method in the ocean, seas, and other natural waters. The recommendations are intended to be platform-independent, since the velocities may be measured using bottom-mounted platforms, platforms mounted beneath the ice, or platforms directly on mooring lines once the data are motion-decontaminated. The procedure for preparing the data for spectral estimation is discussed in detail, along with the quality-control metrics that should accompany each estimate of ε during data archiving. The methods are applied to four 'benchmark' datasets covering different flow regimes and two instrument types (acoustic Doppler and time of travel). Problems associated with velocity data quality, such as phase-wrapping, spikes, measurement noise, and frame interference, are illustrated with examples drawn from the benchmarks. Difficulties in resolving and identifying the inertial subrange are also discussed, and recommendations on how these issues should be identified and flagged during data archiving are provided.
Status: final response (author comments only)
- RC1: 'Comment on egusphere-2025-4433', Anonymous Referee #1, 20 Oct 2025
- RC2: 'Comment on egusphere-2025-4433', Anonymous Referee #2, 05 Nov 2025
Merit:
Estimates of TKE dissipation rate from inertial-subrange fitting have not used consistent processing procedures across different groups. The authors seek to publish a set of best practices for future research. Although the manuscript does not include new scientific findings, I think it would be within the scope of EGU Ocean Science as a Technical Note and has the potential to be useful to the community. I do have some minor comments and suggestions that I think would improve the paper.
General comments:
- The main point of this manuscript is to establish a set of “best practices”. While I agree that the methods in the manuscript are thorough and robust, I think it might be useful to discuss previous research in more detail and use this to guide justification of the new methods; i.e., how do the proposed methods differ from what has been used in the past, and why are they superior? This, I think, would be important for 1) convincing readers that the proposed methods are indeed the best practice and 2) ultimately, having the community adopt them. This is probably best addressed in several different places throughout the text rather than in one place. To be fair, the authors have clearly thought about this issue and components of it are already in the text. For example, the justification on L218-224 is well referenced and is somewhat in line with what I think would improve the overall impact of the paper. However, I do not think the justification in other places is quite as thorough and convincing. A few of these places are pointed out below.
- I suggest that the authors consider sharing relevant parts of the code used to prepare and process the sample datasets. I think the main intention of the authors is for their methods to ultimately be utilized by the community. While my suggestion is optional, sharing code would make it much easier for others to adopt the suggested practices.
Line-by-line comments and suggestions:
L32 – Include the full name of MAVS when first stated. I see it later on, but it should be included here
L30-36 – I suggest elaborating more on the reasons why the approaches are differently suited to high- and low-energy regimes.
Fig 2 – I suggest removing the gray shading inside the triangles. It suggests that the shading represents dissipation rate, but that is not the case.
L81-82 – Add additional clarification. That is, I think the authors are referring to the longer-term distribution of epsilon here, not that epsilon itself does not change over faster timescales than the sampling frequency (which as the authors are surely aware can happen and does not necessarily signify a violation of stationarity).
Reading on, I see the authors include a nice explanation of this at L259… maybe also briefly mention it here.
L83-84 – Perhaps move this to the start of the paragraph?
L121 – I suggest to add a citation or additional explanation
L127-137 – Perhaps Fig 2 should be referred to at the start of the paragraph? It visually explains much of what is here, and I think may be easier to understand for readers new to the topic.
Table 1 title – U bar is the 5th and 95th percentile, correct? Also, why not 2.5 and 97.5 to get the 95% confidence interval?
L154-159 – Although important for understanding the datasets, I feel this is a bit of an aside from the main text. If this is a suggestion as the format for data storage for future experiments, that should be clearly stated. From my experience, many (most?) published datasets are essentially just the “fourth level”, and if there is a suggestion for the inclusion of more information that should be discussed. In any case, I think the main objective is the methods for how to get epsilon, and perhaps this section can be pared down.
Reading on, this is necessary to explain for section 4, so maybe it can be incorporated into that section?
L162 – I think linear interpolation needs to be justified here. I understand the necessity to fill in gaps for constructing spectra, and think further explanation would be useful.
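For what it's worth, the gap filling itself can be very compact; a minimal numpy sketch of what I understand linear interpolation over gaps to mean (the function name and endpoint handling are my own assumptions, not the authors' code):

```python
import numpy as np

def fill_gaps_linear(u, t):
    """Linearly interpolate over NaN gaps in a velocity record before
    spectral estimation; gaps at the ends are held at the nearest value."""
    u = np.asarray(u, dtype=float).copy()
    bad = np.isnan(u)
    if bad.any():
        u[bad] = np.interp(t[bad], t[~bad], u[~bad])
    return u
```

The justification question stands regardless: linear interpolation presumably damps variance at the gap scales, so its effect on the resulting spectra deserves discussion.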
L164 – I don’t understand what “turbulence models” means in this context. I don’t think it’s the method of calculating epsilon, since that would still be the same within a dataset?
Sec 4.1 (until L211) – I’m not sure how useful parts of this section are. That is, I don’t know if it is useful to state that QC should be done on a case-by-case basis or to follow manufacturer recommendations. I understand it is important to QC and thus why the authors have discussed this, but perhaps it could be shortened in places.
Fig 4 – I suggest putting a box around the legend in panel a) for clarity and consistency with panel b).
L312-321 – See my general comment. This is a place where I think additional background information, and references that highlight deficiencies of other methods, would be useful. E.g. why is this spectral averaging procedure different and superior to others?
L340-356 – I think additional background information and referencing would be useful here, as well. Is there another study that has found this with observations, in addition to the study using synthetic spectra which is already cited?
L346,355,357 – “loglad” and “ladlog” are the same thing, right? Please clarify. These are not clearly defined.
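For concreteness, here is my understanding of a fixed-slope least-absolute-deviations fit in log space, which is what I assume "loglad" means (the constant C ≈ 0.53 and the function itself are my own assumptions, not taken from the manuscript):

```python
import numpy as np

def eps_loglad(k, Psi, C=0.53):
    """Fit log10(Psi) = b - (5/3) log10(k) by least absolute deviations.
    With the slope fixed at -5/3, the LAD intercept is the median of the
    compensated log-spectrum; Psi = C eps^(2/3) k^(-5/3) then gives eps."""
    b = np.median(np.log10(Psi) + (5.0 / 3.0) * np.log10(k))
    return (10.0 ** b / C) ** 1.5
```

If something like this is what "loglad"/"ladlog" refers to, stating it this explicitly in the text would remove the ambiguity.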
Fig 6 – I don’t understand the purpose of showing Fig 6. Is it just to show the values of “A”? These are already stated in the text, and I barely see any difference between the different colored lines (I think that is to show insensitivity to the sampling, perhaps?). Maybe I am missing something here.
Sec 4.4.2 – Similar to my previous comments, are there any other studies that have tried different inertial ranges, especially with field observations? Or that have found deficiencies with a method or specifications that are not recommended in the present manuscript? While I agree the results of Bluteau 2025 for synthetic observations cited here and elsewhere are very relevant, I think additional background would strengthen some of the recommendations.
Also, I’m curious if there was a distinction or impact on performance when only short segments had a good-enough fit to be classified in the inertial range, as opposed to cases where longer segments fit k^-5/3.
L504 – A=17 is dependent on the estimates you made, and not universal, correct? That is implied at L507, I think. I suggest to reword here (and in the other subsections of 4.6 where applicable) whether the suggested thresholds are expected to be reasonable in all cases.
Sec 4.6.6 – Wouldn’t anisotropy potentially also influence the spectral shape? Also, related to this, it might be worth mentioning that these flags are not exclusive at the start of section 4.6.
L563 – I don’t quite understand why this flag was not applied. Is it because phase unwrapping was easily rectified (as I think the earlier text might suggest)?
Fig 11e – Like Fig 4, I suggest putting a box around the legend, as the legend symbol for “flagged” is hard to distinguish from the actual flagged values. This could also be done for other panels in this figure.
Citation: https://doi.org/10.5194/egusphere-2025-4433-RC2
RC3: 'Comment on egusphere-2025-4433', Anonymous Referee #3, 19 Nov 2025
This paper provides a standardized method of processing measurements of epsilon from point velocimeters. It provides clearcut recommendations for all aspects of processing, from QC and processing of velocity data through the fitting of the inertial subrange spectra to retrieve epsilon. This paper, and papers like it, provide a valuable opportunity to make turbulence measurements more accessible to the oceanographic community.
To echo other reviewers, I think code examples of your processing methodology would be extremely useful in allowing readers to actually implement your suggested best practices. Please consider it. A similar field turbulence methods paper (Zippel et al; doi: 10.1175/JTECH-D-21-0005.1) provided their code. It doesn’t need to be perfectly commented or organized!
There are many different ways to approach parts of the analysis such as spectral fitting and selection of the wavenumbers that define the inertial subrange. You make a case for your chosen method, but I don’t think that enough evidence is always given to justify your choices as “best practices”. Please see specific instances in the line-by-line comments.
Line-by-line comments
Line 26: Is it possible to briefly explain the physical reason that microstructure profilers are better suited for low energy environments and point-velocity better for high-energy?
Line 57: Since Shcherbina et al, 2018, at least a couple of papers have shown pulse-coherent ADCPs to be a viable method for obtaining field measurements of dissipation rate, so perhaps the phrasing “on the cusp” is inaccurate. Please see Zippel et al. (2021); “Moored Turbulence Measurements Using Pulse-Coherent Doppler Sonar”; doi: 10.1175/JTECH-D-21-0005.1
Line 70: u_rms should be defined.
Line 76: Please define and explain “ambiguity velocity”.
Line 84: Because you go on to also describe the viscous subrange and “large turbulent scales”, perhaps a description of the turbulence cascade would be helpful here.
Line 94: “The largest scales [of the inertial subrange]” – please specify in order to avoid confusion with the largest scales of turbulence in general.
Line 127: Do you mean “highest” rather than “high”?
Line 203: I think it would be worth going into more detail on how unwrapping can be performed, or at least providing a reference to a paper that does. Some wrapping is unavoidable, and even datasets with a lot of wrapping can still be fully usable once corrected.
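To illustrate, the correction I have in mind treats jumps between successive samples as an integer number of wraps of 2·v_ambig and removes the cumulative wrap count (my own sketch, not the authors' procedure):

```python
import numpy as np

def unwrap_velocity(u, v_ambig):
    """Undo phase wrapping in a velocity time series: each sample-to-sample
    jump is rounded to an integer number of wraps of 2*v_ambig, and the
    accumulated wrap offset is subtracted from the record."""
    jumps = np.round(np.diff(u) / (2.0 * v_ambig))
    correction = np.concatenate(([0.0], np.cumsum(jumps))) * 2.0 * v_ambig
    return u - correction
```

Even a short pointer like this, or a citation to a paper that does it properly, would reassure readers that heavily wrapped records are recoverable.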
Line 204: More detail on selecting the max velocity during programming in order to avoid wrapping should be given. For example, if you’ve never sampled in a particular environment before, how should you best estimate the velocity range? Perhaps you could refer to the velocity ranges given in Table 1 as examples of what a user could expect across different environments.
Line 330: Even though these methods are detailed in Bluteau (2025), it would be good to at least summarize what they are here.
Line 333: Why was this method chosen of the 6? Perhaps more details should be given on the comparison analysis between the 6 stated methods?
Line 365: How does a user decide on a value for A? Perhaps mention that you describe this further in Section 4.6.4. Also, why not base flagging directly on the deviation of the spectral slope from the theoretical -5/3? (I am not suggesting that is better, I’m just having some trouble following how A is derived)
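To make the alternative I have in mind concrete, a slope-based flag could be as simple as (purely my own sketch, not a claim about the manuscript's method):

```python
import numpy as np

def slope_deviation(k, Psi):
    """Least-squares slope of the log-log spectrum over the fitted band,
    minus the theoretical inertial-subrange slope of -5/3."""
    slope, _ = np.polyfit(np.log10(k), np.log10(Psi), 1)
    return slope - (-5.0 / 3.0)
```

I am not suggesting this is better than the A-based criterion; I simply found the derivation of A hard to follow.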
Line 375: Here, does log-transformed spectra refer to the (synthetic) observed spectra (Psi^) , and model to the theoretical spectra with the -5/3 slope (Psi)?
Line 376: Why is this particular strategy recommended?
Line 399: Am I understanding that Figure 8 is showing that the introduction of noise does not cause substantial deviations in the ratio of theoretical to measured epsilon? I’m not sure that this alone is enough to justify your method of treating noise is optimal.
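If my reading is correct, the noise treatment amounts to fitting a model of the form Psi_model(k) = C eps^(2/3) k^(-5/3) + N, with a flat (white) noise floor N; in code (my notation and constant, not necessarily the authors'):

```python
import numpy as np

def model_spectrum(k, eps, noise, C=0.53):
    """Inertial-subrange model spectrum plus a wavenumber-independent
    (white) noise floor; the noise term dominates at high wavenumbers."""
    return C * eps ** (2.0 / 3.0) * k ** (-5.0 / 3.0) + noise
```

If so, it would help to state explicitly why fitting the noise floor jointly is preferable to, say, truncating the fitted band below the noise crossover.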
Figure 8: Please define k, L_K, epsilon, and epsilon_m in the caption. Interpretation of figures is greatly helped by avoiding the reader having to hunt through the text for variable definitions.
Citation: https://doi.org/10.5194/egusphere-2025-4433-RC3
EC1: 'Comment on egusphere-2025-4433', Karen J. Heywood, 19 Nov 2025
Many thanks to the three reviewers for their helpful and constructive reviews. I encourage the authors to respond to the reviews here in the online discussion, and to revise their manuscript accordingly. This best practices paper could be an excellent resource for the community.
Karen
Citation: https://doi.org/10.5194/egusphere-2025-4433-EC1
Data sets
NetCDF templates and code for creating and loading ATOMIX ADV benchmark datasets C. E. Bluteau et al. https://doi.org/10.5281/zenodo.16798905
This submission provides guidance and recommendations, developed by SCOR Working Group #160 to quantify ocean mixing, for the observation and estimation of turbulent dissipation from single-point velocity time series. It is exclusively a methods-type manuscript that does not report new science or observations, and it is founded upon previous publications by the lead author, such as Bluteau et al. (2011), with which there is some overlap. This contribution provides useful guidance, but there are a number of mostly minor clarifications that should be addressed before publication.
Figure 2: the caption should be revised to explain the multiple frequency axes above the figure, which correspond to different mean advection speeds. Also, the unit notation in this figure is inconsistent: m s^-1 on the upper axes, m/s on the y-axis.
Figure 3: I’d suggest updating the caption for this figure to improve clarity. Describe the pathway around the figure more explicitly perhaps with the aid of a few more labels. Also, cross-reference it more thoroughly with the main text when the stages are being discussed.
Figure 4: in the caption, please can you explain why a power of 0.2 was applied to the number of samples n in each histogram bin.
Figure 5: could I suggest increasing the line thickness of the length scales and durations to aid cross-referencing them with the colorbars.
Line 355: logLAD undefined
Line 561: typo “…velocities samples…”
Figure 11: please clarify the colouring and shading of the markers in panel (a). The legend for flagged data in panel (b) overlaps the plotted data and is confusing.
It is very helpful that the authors provide sample datasets via the supplementary repository, but have they considered providing some sample Matlab or Python code to perform some of the more standardised aspects of the data preparation and analysis process?