This work is distributed under the Creative Commons Attribution 4.0 License.
Quantitative estimate of sources of uncertainty in drone-based methane emission measurements
Abstract. Site-level measurements of methane emissions are used by operators for reconciliation with bottom-up emission inventories, with the aim of improving accuracy, thoroughness and confidence in reported emissions. In that context it is of critical importance to avoid measurement errors and to understand the measurement uncertainty. Remotely piloted aircraft systems (commonly referred to as ‘drones’) can play a pivotal role in the quantification of site-level methane emissions. Typical implementations use the ‘mass balance method’ to quantify emissions, with a high-precision methane sensor mounted on a quadcopter drone flying in a vertical curtain pattern; the total mass emission rate can then be computed post hoc from the measured methane concentration data and simultaneous wind data. Controlled release tests have shown that errors with the mass balance method can be considerable. For example, Liu et al. (2023) report absolute errors of more than 100 % for the two drone solutions tested; on the other hand, errors can be much smaller, of the order of 16 % root mean square error in Corbett & Smith (2022), if additional constraints are placed on the data, restricting the analysis to cases where the wind field was steady.
In this paper we present a systematic error analysis of physical phenomena affecting the error in the mass balance method, for parameters related to the acquisition of methane concentration data and to postprocessing. The sources of error are analysed individually; it must be realised that, in practice, individual errors can accumulate and can also be augmented by other sources not included in the present work. Examples of such sources include the uncertainty in methane concentration measurements by a sensor with finite precision, or the method used to measure the unperturbed wind velocity at the position of the drone. We find that the most important source of error considered is the horizontal and vertical spacing in the data acquisition, as coarse spacing can result in missing a methane plume. The potential error can be as high as 100 % in situations where the wind speed is steady and the methane plume has a coherent shape, contradicting the intuition of some operators in the industry. The likelihood of an error of this extent can be expressed in terms of a dimensionless number defined by the spatial resolution of the methane concentration measurements and the downwind distance from the main emission sources. The learnings from our theoretical error analysis are then applied to a number of historical measurements in a controlled release setting. First, we show how the learnings on the main sources of error can be used to eliminate potential errors during the postprocessing of flight data. Second, we evaluate an aggregated data set of 1,001 historical drone flights; our analysis shows that the potential errors in the mass balance method can be of the order of 100 % on occasion, even though the individual errors are much smaller in the vast majority of the flights. The discussion section provides some guidelines to industry on how to avoid or minimize potential errors in drone measurements for methane emission quantification.
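The abstract describes the mass balance method only in words; the sketch below illustrates how a total emission rate could be computed from a gridded curtain of background-subtracted methane mole fractions and the wind component normal to the curtain. The grid resolution, the uniform wind, the unit conversion via the ideal gas law and all names and values are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the mass balance flux integral on a gridded curtain.
# 'enhancement_ppm' is the background-subtracted methane mole fraction on a
# regular (z, y) grid, 'u_perp' the wind component normal to the curtain,
# 'dy' and 'dz' the grid spacings in metres. Illustrative only.
import numpy as np

def mass_balance_emission_rate(enhancement_ppm, u_perp, dy, dz,
                               temperature_k=288.15, pressure_pa=101325.0):
    """Return the emission rate in kg/h implied by a concentration curtain."""
    M_CH4 = 16.04e-3           # kg/mol, molar mass of methane
    R = 8.314                  # J/(mol K), universal gas constant
    # Convert the ppm enhancement to a mass concentration in kg/m^3 (ideal gas).
    mol_per_m3 = pressure_pa / (R * temperature_k)
    kg_per_m3 = enhancement_ppm * 1e-6 * mol_per_m3 * M_CH4
    # Flux through the curtain: sum of (mass concentration * normal wind * cell area).
    flux_kg_per_s = np.sum(kg_per_m3 * u_perp) * dy * dz
    return flux_kg_per_s * 3600.0  # kg/h

# Example: a 20 m x 20 m curtain at 1 m resolution with a uniform 3 m/s wind.
enh = np.zeros((20, 20))
enh[8:12, 8:12] = 2.0          # a 4 m x 4 m patch of 2 ppm enhancement
print(mass_balance_emission_rate(enh, u_perp=3.0, dy=1.0, dz=1.0))
```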
Status: closed
RC1: 'Comment on egusphere-2024-1175', Joseph Pitt, 15 Aug 2024
This study provides a thorough assessment of the sampling errors associated with facility-level mass balance emission estimates made using drones. This approach to estimating emissions is becoming increasingly popular due to a combination of improved lightweight measurement technology and an increased focus on facility-level reporting by site operators (e.g. through OGMP2.0). It is therefore important to understand the various sources of uncertainty associated with these estimates so as to better design future mass balance experiments and to interpret the results. This manuscript represents a valuable contribution to this effort, and offers useful practical guidance for future studies. The manuscript is very well written and clearly presented. I suggest that it should be published in AMT once the following minor issues have been addressed.
The only substantial addition to the paper that I think is necessary is a discussion on how the background concentration was determined for the real-life examples. There are many ways that this can be determined, either statistically (e.g. some percentile) or spatially (e.g. from grid cells at the end of the plane), and it can be determined separately for individual transects/curtains or a single background can be used for the whole flight. Maybe the enhancements are large enough that different choices of background don’t matter in these cases, but even if that is the case it is important to explain how the background was determined. I think it would also be useful to demonstrate the impact of some different plausible background calculation methods, even if only to show that it doesn’t matter here.
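To make the reviewer's point concrete, the following sketch contrasts two plausible background choices of the kind mentioned above: a low percentile of all samples versus the mean of the outer columns of the gridded curtain. The function names, the percentile value and the synthetic curtain are illustrative assumptions, not the paper's method.

```python
# Two candidate background estimates for a methane curtain (illustrative).
import numpy as np

def background_percentile(concentration_ppm, q=5.0):
    """Statistical background: a low percentile of all measured mole fractions."""
    return np.percentile(concentration_ppm, q)

def background_edges(curtain_ppm, n_edge_cols=2):
    """Spatial background: mean of the outermost columns of the gridded curtain."""
    edges = np.concatenate([curtain_ppm[:, :n_edge_cols].ravel(),
                            curtain_ppm[:, -n_edge_cols:].ravel()])
    return np.nanmean(edges)

# Compare the two on a synthetic curtain with a 2.00 ppm ambient level.
rng = np.random.default_rng(0)
curtain = 2.00 + 0.01 * rng.standard_normal((20, 40))
curtain[8:12, 18:22] += 1.5    # plume enhancement in the interior
print(background_percentile(curtain.ravel()), background_edges(curtain))
```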
Specific comments:
L18 – typo “a course spacings”
L37-39 – I can’t quite work out how these values relate to table 3 in Saunois et al. The 60% anthropogenic value is presumably from the top-down estimates, but the 13% value for oil and gas must be derived from the bottom-up values? It would be good to add some more context, and also to emphasise the large uncertainties associated with these values.
L43 – check sentence grammar
L66 – remove second “variations”
L224-225 – remove “correspond” or “relate”
L319 – why do large variations in wind speed always lead to an underestimate? This isn’t immediately obvious to me so it would be good to expand on it here.
L429 – should this be “blue to yellow”?
L431-432 – should this be “orange circles”?
Figures 11 and 12 – references to these figures in the text on page 23 are mixed up in multiple places. Also in Figure 12a, why do all the simulations have an error of 3.25%?
Figure B2 – the discussion of this figure confuses me. The text says that the error decreases towards the left of the figure, but that does not appear to be the case.
A few figures (e.g. Fig. 4, Fig. 5) have slightly clipped legends/axes labels and would look neater if replotted
Citation: https://doi.org/10.5194/egusphere-2024-1175-RC1
AC1: 'Reply on RC1', Rutger IJzermans, 24 Oct 2024
Dear Dr Pitt,
Many thanks for your review. Please find attached a document that addresses your questions and remarks one by one.
Since there were a few common remarks between yourself and Reviewer #2, as well as a question that we think is best addressed to the Editor, we have decided to write one document to address all items at once; see document attached.
Our revised manuscript will be available shortly on this web portal, too.
Best regards,
Rutger IJzermans
RC2: 'Comment on egusphere-2024-1175', Steven van Heuven, 27 Sep 2024
This manuscript purports to present a rigorous characterization of the effects of various sources of error on the estimates of rates of point source emission of trace gases, ostensibly in order to assign post-hoc uncertainties to such estimates. Undoubtedly these uncertainties are currently not well established and direly needed, so an attempt to provide an analytical framework is laudable and much appreciated.
Having said that, I find it hard to assess how much of a /general/ framework is offered here. The derived uncertainties appear to pertain, at least in part, to the measurement, simulation (in part idealized) and processing methodologies that are particular to this study. I believe the manuscript may benefit from more clearly delineating between theory and practice (some suggestions below), and striking a more modest tone where it comes to the applicability of its findings. Certainly, the opening wording of the discussion ("[the presented framework] allows to back-calculate errors and uncertainties in historic data") reads a little presumptuous. Other sections, on the other hand (for instance the abstract), are more balanced.
The writing style of the manuscript is excellent. More importantly, the manuscript features considerable rigor in physics and math and some interesting normalization and scaling ideas. These may well aid readers in their own assessment of mass fluxes from UAV measurements. I therefore recommend publication in AMT, with the request that the following points are addressed.
I balk a bit at the "the authors have no competing interests" statement; all authors work for commercial vendors or beneficiaries of the technology that through this publication obtains increased credibility. If that's not a competing interest, I do not know what is...
*** Title
As frankly admitted by the authors, several major sources of uncertainty are not covered in this paper. Obviously, that cannot be expected of any single study. Nonetheless, consider reflecting that in the title, which currently reads rather all-encompassing. E.g., use "*several* sources of unc.".
*** Section 2
L86. Not that it matters much, but do you mean "z=0 at the emitter" or "z=0 at ground level" (i.e., same as z=Z)? If you mean the latter, consider writing the latter. "z = 0 at ground level" is clear, while "z=H where H is source height" is (to me) confusing.
*** Section 3
In section 3.2.2 I suspect that the choice of nearest-neighbor interpolation for filling the grid cells of your curtain may make your method susceptible to bias from 'missing the plume' in a way that other interpolation methods (e.g., kriging) would not. As a user I would be very much helped by an assessment of, say, kriging vs nn_interp. A rigorous treatment of that may be beyond the scope of this manuscript, but a qualitative treatment of the pros and cons of nn_interp compared to objective mapping, and why you chose nn_interp, may be warranted here. Also, please provide visual example(s) of the resulting interpolation of a flight trajectory, ideally one for "crossing" and one for "missing" the plume.
In interpreting section 3.4, this reader misses detail on how the wind field variability modeling is set up, and there's no specific reference to get started with. Section 3.4 sort of suggests that during each of 300 model runs, continuously emitted methane is carried from the source by a time-varying wind field. That may produce realistically varying, patchy concentrations at the curtain /within/ each simulated flight, and that would be great (and hard to work with). But from the methods section, I get the idea that you perform 300 runs /between/ which the wind speeds differ and the idealized Gaussian plume is located at slightly different locations every run. Such data would be much easier to work with, but may present an underestimate of the effect of wind field variability. Consider expanding and/or illustrating what a typical simulated curtain looks like, to aid the reader in interpretation of your results.
On that note: there's a bit of a surprising scale difference between the modeling setup (5 kg/h, 5 m distance) and the purported use in UAV work, which would typically necessitate larger distances. In the modeling setup, there is (as I understand it) a ~1 second travel time from source to curtain, which would not allow for patchiness to develop. Only a sort-of-random-walking circle could exist, whether within or between runs - correct? (Ah, all this is mentioned later on; never mind.) I believe that in particular this treatment of wind variability does not translate very well to practice, and that should be noted more clearly right away, not only later on in 3.5 ("other potential sources").
L319. Please provide a rationale for why the emission estimate goes down with wind field variability, similar to the (satisfactory!) explanation for the 'curtain angle effect' in Fig. 5.
*** Section 4.1
There is no mention here of the background concentration - was that provided by ScAv? If not, how was that obtained, and what uncertainty is associated with the choice of background? Same question for the SeekOps flights later on.
Figures 7a & 8abcd. Please indicate the wind direction (or, why not switch to the "ENZ" system, as suggested L83-87?). For 8abcd, consider using identical perspectives - now they're all rotated, and it's hard to know how they compare. Same for the color scale. Consider projecting the 'shadows' of the points onto the side and bottom walls of the domain, so the reader gets a better sense of the 3D structure of the point cloud/curtain. Is the source located at x,y,z = 0,0,0? Figure C2 and L86 certainly seem to suggest so. If so, was a sensor-carrying (i.e., big) UAV really flown just a few meters downwind of a ground-level source? Were these curtains not flown perpendicular to the wind? I have many questions about these figures. Please improve to clarify.
L357. "Curtains have been chosen such that they contain elevated concentrations and do not contain conflicting concentration values" (and then they match markedly well with the known rate). Judging from Figure 9, you've discarded the latter half of the dataset for that flight, which clearly contains elevated concentrations. The basis for discarding is thus the "confliction" criterion, which is not further detailed. Please do detail it, to avoid being criticized for cherry-picking. Also, I think this criterion should be very self-explanatory, or be shown to work well on the other datasets. If this is ad-hoc selection of curtains that work well, that certainly deserves mention (!).
Table 2: please indicate what % of the data was discarded and the basis for that.
L361. "overall average" - delete "overall"?
*** Section 4.2
In general, I'm missing a description of how well the framework you developed applied to the 1001 flights of SeekOps. Were you able to find flight level spacings, curtains and the like for all 1001 flights, or was a certain percentage of the flights simply unprocessable? And was that all manual labour, or did you develop code to plow through the flights autonomously? If manual labour, how much "expert judgement" was needed and of what type? Just outlier removal, or more involved things like the "curtain selection" of 4.1?
Figure 9: Consider adding the words "Contours indicate ... etc." to the start of the first line of the caption.
Figure B1 (nitpicking): if the plume maximum value is normalized (see caption) to, supposedly, "1", why is it all yellow with no detail on the inside? The color suggests a much higher peak value in the middle of the plume. Consider adjusting or elaborating.
*** Sections 5 and 6
Nice discussion and conclusions. Consider discussing how the main finding (strong dependence on vertical/horizontal spacing) might change with the use of another mapping technique. Do you think your conclusion is 'universal'? (cf. L364-366). Note that the "there is an optimum though" sentences are present in almost (?) identical wording in both sections.
Citation: https://doi.org/10.5194/egusphere-2024-1175-RC2
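To illustrate the interpolation question raised in the comment on Section 3.2.2, the sketch below grids the same synthetic transect samples with nearest-neighbour and linear interpolation, the latter standing in for a smoother method such as kriging (which would require an additional package). The sampling geometry, the plume shape and the scipy-based implementation are assumptions for illustration only; they are not the paper's processing chain.

```python
# Compare curtain gridding with nearest-neighbour vs linear interpolation
# on synthetic transect samples (illustrative, not the paper's method).
import numpy as np
from scipy.interpolate import griddata

# Sparse samples along five horizontal transects 5 m apart (assumed geometry).
y_samples = np.tile(np.arange(0.0, 41.0, 1.0), 5)
z_samples = np.repeat([2.5, 7.5, 12.5, 17.5, 22.5], 41)
plume = 2.0 * np.exp(-((y_samples - 20.0) ** 2 / 18.0 +
                       (z_samples - 10.0) ** 2 / 8.0))   # synthetic plume, ppm

# Target grid at 1 m resolution covering the curtain.
yy, zz = np.meshgrid(np.arange(0.0, 41.0, 1.0), np.arange(0.0, 25.0, 1.0))
pts = np.column_stack([y_samples, z_samples])

nearest = griddata(pts, plume, (yy, zz), method='nearest')
linear = griddata(pts, plume, (yy, zz), method='linear', fill_value=0.0)

# The integrated enhancement differs between the two gridded curtains, and
# that difference propagates directly into the mass balance emission estimate.
print(np.nansum(nearest), np.nansum(linear))
```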
AC2: 'Reply on RC2', Rutger IJzermans, 24 Oct 2024
Dear Dr Van Heuven,
Many thanks for your review. Please find attached a document that addresses your questions and remarks one by one.
Since there were a few common remarks between yourself and Reviewer #1, as well as a question that we think is best addressed to the Editor, we have decided to write one document to address all items at once; see document attached.
Our revised manuscript will be available shortly on this web portal, too.
Best regards,
Rutger IJzermans
Viewed
HTML | PDF | XML | Total | BibTeX | EndNote
---|---|---|---|---|---
304 | 107 | 100 | 511 | 5 | 8