This work is distributed under the Creative Commons Attribution 4.0 License.
Methane Point Source Quantification Using MethaneAIR: A New Airborne Imaging Spectrometer
Abstract. The MethaneSAT satellite instrument and its aircraft precursor, MethaneAIR, are imaging spectrometers designed to measure methane concentrations with wide spatial coverage, fine spatial resolution, and high precision compared to currently deployed remote sensing instruments. At 12960 m cruise altitude above ground (13850 m above sea level), MethaneAIR datasets have a 4.5 km swath gridded to 10 m x 10 m pixels with 17–20 ppb standard deviation on a flat scene. It was deployed in the summer of 2021 in the Permian Basin to test the accuracy of the retrieved methane concentrations and emission rates using the algorithms developed for MethaneSAT. We report here point source emissions obtained during a single-blind volume-controlled release experiment, using two methods: (1) The modified Integrated Mass Enhancement (mIME) method estimates emission rates using the total mass enhancement of methane in an observed plume combined with winds obtained from the Weather Research and Forecasting model run in Large Eddy Simulation mode, driven by High-Resolution Rapid Refresh meteorological data (WRF-LES-HRRR). WRF-LES-HRRR simulates winds with stochastic eddy-scale (100–1000 m) variability, which is particularly important for low-wind conditions and for informing the error budget. The mIME can estimate emission rates of plumes of any size that are detectable by MethaneAIR. (2) The Divergence Integral (DI) method applies Gauss's theorem to estimate fluxes through a series of closed surfaces enclosing the sources. The set of boxes grows from the upwind side of the plume through the core of each plume and downwind. No selection of inflow concentration, as used in the mIME, is required. The DI approach can efficiently determine fluxes from large sources and clusters of sources but cannot resolve small point emissions. These methods account for the effects of eddy-scale variation in different ways: the DI averages across many eddies, whereas the mIME re-samples many eddies from the LES simulation; they also use different wind products. Emission estimates from both the mIME and DI methods agreed closely with the blinded controlled release rates (N = 21). The York regression between the estimated emissions and the released emissions has a slope of 0.96 [0.84, 1.08], R = 0.83, and N = 21, with 30 % mean percentage error for the whole data set, indicating that MethaneAIR can quantify point sources emitting more than 200 kg/hr with the mIME and 500 kg/hr with the DI method. The two methods also agreed on methane emission estimates from various uncontrolled sources in the Permian Basin. The experiment thus demonstrates the powerful potential of our instruments for remote sensing and quantification of methane emissions.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-822', Anonymous Referee #1, 30 Jun 2023
Methane Point Source Quantification Using MethaneAIR: A New Airborne Imaging Spectrometer
General comments:
This valuable case study analyses data from the airborne imaging spectrometer MethaneAIR, the aircraft precursor instrument for MethaneSAT. In addition to a comprehensive explanation of the methane retrieval algorithms, backed up by a detailed supplement, it contains two interesting and important highlights: overflights of a controlled methane release experiment to strengthen the confidence in the method and to quantify its detection limit, and complementary atmospheric modelling to handle the issue of methane plumes being distorted by atmospheric turbulence. The paper is well written and presents helpful figures and tables. I have the following minor comments.
Specific Comments:
Introduction: it is good to hear about the positive development towards MethaneSAT, yet we should not forget that imaging spectroscopy has limitations. The intro would benefit from short explanations on how MethaneAIR/SAT will cope with variable or low surface albedo, the presence of aerosol in some of the plumes, the necessity to use CO2 as a retrieval proxy, and the necessary instrument design tradeoff between high spectral and high spatial measurement resolution. Also, it is OK to cite many similar remote sensing techniques, yet IPDA lidar is missing.
Section 2.1: you could/should mention that tropopause height variations influence the methane column due to less methane in the stratosphere. This may play a role at HIAPER flight altitudes and will certainly affect MethaneSAT data.
Figures 1 and 2 would benefit from displaying a km scale and an arrow indicating the approximate wind direction.
Line 140: … 1.5 standard deviations of the inflow values, above the median value of the inflow…
Line 141: … setting values below that value to NA.
Equations 1 and 2: for the sake of logic I would place Eq 2 before Eq 1.
Lines 158-160: are a repetition of lines 150-152; should be removed.
Line 219: “… a replacement for each synthetic observation.” I do not understand this. Please explain more precisely with what you replace the original inflow pixels.
Section 2.6.1: at which altitude (agl) was the methane released? The altitude plays a role in subsequent (turbulent) mixing and plume development. The LES release altitude should be selected accordingly.
Line 324: Typical emission rates were …
Figure 4: there is an inconsistency with Line 341 where you state that the correlation statistics are computed for emissions exceeding 500 kg/hr. The figure also shows emissions < 500 kg/hr. You should at least display those differently, or remove them from the figure. Also, I do not understand the red square: why does it represent “no detection” since it lies at about 700 kg/hr and therewith well above any detection limit?
Line 395: the reference for Chan Miller et al is missing.
Supplement, S1.1: I am afraid that the coefficients a and b depend on the ambient weather, or more precisely, on the turbulence conditions. Then, the relationship between U_eff and U_10m would lose its universality?
S1.1: what do you mean by overfitting? This term is not familiar to me.
S1.2, before equation S7: Combining equations S5 and S6… (not A5 and A6)
S1.3: I am afraid that the ratio method is only applicable under the condition that the wind velocity is correct?
Figures S1 and S2: add the Stanford release rates for reference. I would plot lines with symbols instead of bars. Maybe merge both figures for better comparison? What do you mean with “endpoints”?
Figures S5 and S6: maybe merge both figures for better comparison, plotting lines instead of bars?
Citation: https://doi.org/10.5194/egusphere-2023-822-RC1
AC1: 'Reply on RC1', Apisada Chulakadabba, 30 Jun 2023
Thank you for your constructive feedback. We agree that a concise discussion of the performance questions/limitations should be added to the paper. We appreciate your time and will carefully address all your comments in the revision.
Citation: https://doi.org/10.5194/egusphere-2023-822-AC1
RC2: 'Comment on egusphere-2023-822', Anonymous Referee #2, 10 Jul 2023
Review of “Methane Point Source Quantification Using MethaneAIR: A New Airborne Imaging Spectrometer” by Chulakadabba et al.
Chulakadabba et al. present a study demonstrating point source methane quantification from the MethaneAIR imaging spectrometer. The core of the work is evaluation by comparison with a controlled release experiment. Multiple point source quantification methods are evaluated, and the controlled release is used for evaluation and method development purposes. The paper is appropriately placed in AMT. Overall the paper is well written and represents important work in building confidence and robustness in MethaneAIR quantifications of methane point sources. I have a couple of concerns outlined below. Once these are satisfactorily addressed, publication would be warranted.
Conceptual/Larger concerns:
Missing information/reference on the retrieval: This paper is focused on the evaluation of quantification of point source emissions, which is a reasonable and appropriate scope. However, we are missing critical information on the retrieval product used to evaluate some elements of the manuscript. Chan Miller 2022 is cited repeatedly as a reference for the retrieval used; however, there is no journal or DOI listed with this reference. Extensive Google Scholar searching shows no such article. I do find two useful/relevant articles not cited, Conway et al., AMTD 2023 and Staebell et al., AMT 2021, that show the data processing and spectral calibration of MethaneAIR respectively (both should be cited here).
This leaves a rather problematic gap, as we have no information on the retrieval or how the XCH4 values are produced. Are these true total column values or just below-aircraft partial columns? What is the averaging kernel? This latter question is critically important, as the quantification methods do not discuss the averaging kernel/sensitivity and implicitly assume that MethaneAIR has a uniform averaging kernel, at least in the boundary layer. Is this true? Or should some correction be applied in the quantification to account for different sensitivity in the boundary layer (like in Wunch et al., GRL 2009)? I have a further question on the stability of the retrieval and its sensitivity to sampling – are all the flight legs getting the same background value, without any aliasing that can manifest when combining the flight lines?
Slope fitting: I appreciate there is some nuanced discussion of York fitting versus OLS. For comparison with the controlled release, this is a scenario where OLS would strike me at first as most appropriate – the x-axis should be very well known and the y-axis has relatively large uncertainty. I'm surprised in Figure 3 that the error bars appear comparable on both axes – but I don't know what those error bars represent and if they are the same. I'd like to see a little more concluding discussion on the slope methods. Right now it reads a bit as though the authors internally disagree on what slope method is correct, and presented both with a lean towards the York. But what should the community draw from this? To me, OLS would still be the correct approach for controlled releases provided the error is < 3 times that of the tested airborne/satellite method. In this example perhaps there is a special case where the metered release had higher error?
False positive/false negative rate?: Can anything be said about false positive or false negative rate from the data collected?
Care needed with extrapolation to MethaneSAT: At many times, particularly in the intro, there is an emphasis on the similarity and extensibility of this work to MethaneSAT. This needs to be presented with precision, as the work here demonstrates that these point source quantification approaches can work with appropriate imaging spectrometer data. So it is fair to say that, based on design specifications, this is extensible to MethaneSAT, but we do not yet know what MethaneSAT spectra will look like (lines 41-43, for example). Also, it isn't indicated in this paper, but I had the impression different detectors are used in MethaneSAT and MethaneAIR, which can lead to significant differences. If this is so, it should be acknowledged and the similarities between the two instruments should not be overstated.
Care needed with discussion of 'independent' winds, and extensibility of results/approach outside the US where HRRR is not available: In the abstract (line 17) it is stated that independent winds are used with different approaches. In fact, HRRR is foundational in both the LES and the DI approaches. I recognize driving the LES with HRRR is not identical to using HRRR directly, but in the end the winds are based on the same input drivers. This should be clearly acknowledged. Further, the implications of this should be discussed, as this is not an independent wind comparison. Also, HRRR is not available outside the US, so the impact on both the LES and DI approaches of using different/coarser-resolution/less accurate wind fields should be discussed. The impacts will be different for the two approaches. And the LES model will also depend on inputs for surface topography and energy balance, which might be worse outside the US. There is a broader question embedded here in the scalability and extensibility of the LES approach.
Specific comments/line by line:
abstract line 17: states that LES and DI use different wind products – please correct this statement.
abstract concluding sentence: This work doesn't demonstrate the potential of "our instruments". It demonstrates the potential of MethaneAIR and suggests that the quantification method should be transferable to the satellite if the satellite meets design specs.
Around lines 35: Sentinel-2 and WorldView-3 were not designed for methane sensing but do it anyway. PRISMA is also missing here. It is important to clarify whether the instrument was designed for the methane use case or not. Actually, AVIRIS-NG was also not designed for methane either (though Carbon Mapper will be). One point being that sensors designed/dedicated to this purpose should outperform sensors reanalyzed for it.
line 41: “nearly identical spectroscopy”. Don’t the two instruments use different detectors? Don’t overstate the similarities.
line 43: MethaneSAT is designed to have the capabilities, but we do not yet know it will (fingers crossed).
line 47: Clarify, you are validating emissions estimates here, you are not validating concentrations in this paper.
line 65 typo “dein”
Line 74: Using HRRR is restricted to US. If wanting to do this elsewhere, what happens? Some discussion of input met field dependence/importance?
line 88-89: What is interference from nearby activity? Wouldn't one say distinguishing amongst other activity is important?
Table 1: Why start on RF04 - what happened with RF01-03?
What was successful flight fraction/instrument duty cycle?
Figure 1: Is this true total column XCH4 or partial column? What are gaps in Figure 1b? Maybe show faint flight lines so we can see the flight pattern.
lines 105-110: what is the impact of the gridding/smoothing choice on methane quantification? The gaussian filter is somewhat arbitrary and understanding the impacts of the smoothing on emissions estimates is important. What happens with no filter? What happens with filters that are more/less aggressive with different correlation lengths?
line 112-113: What are the implications of assuming the source location was identified? This may be a big real-world challenge, and more discussion is warranted.
2.3.1: Does the entire LES approach require knowledge of source site to run in its forward mode? How does this impact the feasibility to run for lots of unknown source locations and does this require first an algorithm to identify source location? Building off this and computational requirements, is this LES approach scalable?
line 147: Is the LES resolution coarser b/c of computation? Why even in this limited test/example cases could it not be run at comparable scale to the observations?
line 155: It is a bit unclear what you do for Ueff in the main text, is it the Varon equation or do you derive yourself from LES model? You state in the supplement you end up using Varon. Can you evaluate these assumptions/parameterization at all with the LES runs you have? What type of errors might be imposed?
What is the averaging kernel? All the flux quantifications assume a uniform averaging kernel of 1.
line 175: How is the wind speed rotated?
Figure 2: How are the different rectangles selected? Custom/by eye for each source? Why only step outward in direction of plume flow?
Could some of the variability be additional sources/sites? It looks possible from the disconnected plume in 2b.
Why does the flux increase after 700m, and why would that be the only place for other sources?
section 2.4: A lot of 'we compared...' but then no display of the comparison or discussion of how that comparison is used. Leaves me wondering a bit what was learned/demonstrated with these comparisons.
In places, diffuse-source capability is discussed but not really demonstrated in this paper.
line 264: Is MethaneAIR 10x10 or 20x20?
line 265: This would suggest the authors should then state that, with this sampling approach, other sources within 1 km impact the ability to quantify emissions…
Please show the fully blinded 1:1 figure with slope/fitting as well as the after unblinding.
It’s not really fair to call the release range small b/c lower than 1,000 kg/hr — 1,000 kg/hr is a very large emissions rate.
Fig: 3. how is the point at 250 flagged as below detection limit when the measured emission was over 200, higher than three other non-flagged points?
Are the flagged points included in the fit?
What are the reported error bars shown here? Is it true that the meter error on the ground is comparable to the retrieval? In Sherwin et al it looks like much smaller meter uncertainty.
line 332: While invoking rapture is entertaining, I believe you mean rupture.
line 356: need to add with appropriate testing and validation of the satellite…
line 366: well, but it is computationally intensive and requires accurate input winds (HRRR) and accurate representation of surface topography and energy balance as well…
supplement:
S5: so what happens with negative values? since you use the average to subtract you will have a lot of negative values…
could you expand the discussion in the final two paragraphs? conceptually OLS would make the most sense. Unless metered errors were larger than anticipated and correlated with release rate the heteroskedasticity test is surprising.
Further question on quantification related to the retrieval: What about topography changing the column air mass, and the air mass in the pbl. How does that come into play in these quantification measures and does that need to be addressed/discussed?
Citation: https://doi.org/10.5194/egusphere-2023-822-RC2
AC2: 'Reply on RC2', Apisada Chulakadabba, 18 Jul 2023
Thank you so much, R2, for your constructive comments. We appreciate your time and will carefully address all your comments in the revision. Below are some responses to your concerns.
Conceptual/Larger concerns:
Missing information/reference on the retrieval: This paper is focused on the evaluation of quantification of point source emissions, which is a reasonable and appropriate scope. However, we are missing critical information on the retrieval product used to evaluate some elements of the manuscript. Chan Miller 2022 is cited repeatedly as a reference for the retrieval used; however, there is no journal or DOI listed with this reference. Extensive Google Scholar searching shows no such article. I do find two useful/relevant articles not cited, Conway et al., AMTD 2023 and Staebell et al., AMT 2021, that show the data processing and spectral calibration of MethaneAIR respectively (both should be cited here).
This leaves a rather problematic gap, as we have no information on the retrieval or how the XCH4 values are produced. Are these true total column values or just below-aircraft partial columns? What is the averaging kernel? This latter question is critically important, as the quantification methods do not discuss the averaging kernel/sensitivity and implicitly assume that MethaneAIR has a uniform averaging kernel, at least in the boundary layer. Is this true? Or should some correction be applied in the quantification to account for different sensitivity in the boundary layer (like in Wunch et al., GRL 2009)? I have a further question on the stability of the retrieval and its sensitivity to sampling – are all the flight legs getting the same background value, without any aliasing that can manifest when combining the flight lines?
>>>>> The Chan Miller et al. (2023) manuscript should be submitted to AMT soon; I can share the current draft with R2. I agree that we missed the references to Conway et al., AMTD 2023, and Staebell et al., AMT 2021. We shall include them in the updated manuscript.
>>>>> It's a true column comprising the paths from space to the ground and from the ground to the aircraft (~12 km). However, since most of the excess methane is within the boundary layer (below the aircraft), we assume that the averaging kernel is applicable for the lowest kilometers. We will include a graph showing the averaging kernel, which is slightly larger near the surface than in the upper atmosphere. The detailed discussion will be in Chan Miller et al.
>>>> No, we didn’t have a uniform averaging kernel. We use multilayer retrievals that include the shape of the averaging kernels.
Slope fitting: I appreciate there is some nuanced discussion of York fitting versus OLS. For comparison with the controlled release, this is a scenario where OLS would strike me at first as most appropriate – the x-axis should be very well known and the y-axis has relatively large uncertainty. I'm surprised in Figure 3 that the error bars appear comparable on both axes – but I don't know what those error bars represent and if they are the same. I'd like to see a little more concluding discussion on the slope methods. Right now it reads a bit as though the authors internally disagree on what slope method is correct, and presented both with a lean towards the York. But what should the community draw from this? To me, OLS would still be the correct approach for controlled releases provided the error is < 3 times that of the tested airborne/satellite method. In this example perhaps there is a special case where the metered release had higher error?
>>>>> Two main points that we want to convey.
- The community should be aware of the errors from the releases. In some cases (such as this controlled release experiment), the errors are comparable; thus OLS is inappropriate. We plan to add a second set of releases in 2022. The Stanford team has since improved the instrumentation, and in this new set of releases their instrument errors are much smaller than our estimates.
- The York fit is more appropriate, as the regression is designed to accommodate errors in both the x-axis and y-axis variables (a minimal sketch is given below). In the second case, where the Stanford errors are close to negligible, the York fit is still appropriate and gives answers similar to OLS.
False positive/false negative rate?: Can anything be said about false positive or false negative rate from the data collected?
>>>>> We can definitely say something about the rates, but we can’t generalize them for future cases. This is the limitation of any controlled release experiment.
Care needed with extrapolation to MethaneSAT: At many times, particularly in the intro, there is an emphasis on the similarity and extensibility of this work to MethaneSAT. This needs to be presented with precision, as the work here demonstrates that these point source quantification approaches can work with appropriate imaging spectrometer data. So it is fair to say that, based on design specifications, this is extensible to MethaneSAT, but we do not yet know what MethaneSAT spectra will look like (lines 41-43, for example). Also, it isn't indicated in this paper, but I had the impression different detectors are used in MethaneSAT and MethaneAIR, which can lead to significant differences. If this is so, it should be acknowledged and the similarities between the two instruments should not be overstated.
>>>>>>> Correct. MethaneSAT and MethaneAIR have two different detectors. We will add comments on these effects to the manuscript.
Care needed with discussion of 'independent' winds, and extensibility of results/approach outside the US where HRRR is not available: In the abstract (line 17) it is stated that independent winds are used with different approaches. In fact, HRRR is foundational in both the LES and the DI approaches. I recognize driving the LES with HRRR is not identical to using HRRR directly, but in the end the winds are based on the same input drivers. This should be clearly acknowledged. Further, the implications of this should be discussed, as this is not an independent wind comparison. Also, HRRR is not available outside the US, so the impact on both the LES and DI approaches of using different/coarser-resolution/less accurate wind fields should be discussed. The impacts will be different for the two approaches. And the LES model will also depend on inputs for surface topography and energy balance, which might be worse outside the US. There is a broader question embedded here in the scalability and extensibility of the LES approach.
>>>>> We agree that we should acknowledge the fact that the DI and mIME used similar HRRR driver winds. We will also acknowledge the broader question of the scalability and extensibility of the LES approach, including that the LES might not be able to scale globally.
Specific comments/line by line:
abstract line 17: states that LES and DI use different wind products – please correct this statement.
>>>> Will do.
abstract concluding sentence: This work doesn't demonstrate the potential of "our instruments". It demonstrates the potential of MethaneAIR and suggests that the quantification method should be transferable to the satellite if the satellite meets design specs.
>>>> Will fix this.
Around lines 35: Sentinel-2 and WorldView-3 were not designed for methane sensing but do it anyway. PRISMA is also missing here. It is important to clarify whether the instrument was designed for the methane use case or not. Actually, AVIRIS-NG was also not designed for methane either (though Carbon Mapper will be). One point being that sensors designed/dedicated to this purpose should outperform sensors reanalyzed for it.
>>>> We can include PRISMA here. We will also make sure to mention whether the instrument was designed for methane use.
line 41: “nearly identical spectroscopy”. Don’t the two instruments use different detectors? Don’t overstate the similarities.
>>>> We will replace the word “nearly” with “very similar.”
line 43: MethaneSAT is designed to have the capabilities, but we do not yet know it will (fingers crossed).
>>>> We will include this change.
line 47: Clarify, you are validating emissions estimates here, you are not validating concentrations in this paper.
>>>> We will fix this.
line 65 typo “dein”
>>>> We will fix this.
Line 74: Using HRRR is restricted to US. If wanting to do this elsewhere, what happens? Some discussion of input met field dependence/importance?
>>>>> We tried estimating the emissions using different wind products. It’s hard to generalize which product is better or worse than the others.
line 88-89: What is interference from nearby activity? Wouldn't one say distinguishing amongst other activity is important?
>>>>> In this case, interference from nearby sources means methane enhancement from nearby sources that can’t be separated from the source of interest.
Table 1: Why start on RF04 - what happened with RF01-03?
What was successful flight fraction/instrument duty cycle?
>>>>> RF04 is simply the identifier of the research flight within the MethaneAIR campaign series. The first three research flights (RF01–RF03) were successful engineering flights: their main purpose wasn't methane data collection, but rather making sure that our instrument was working properly. We didn't report data from the engineering flights.
Figure 1: Is this true total column XCH4 or partial column? What are gaps in Figure 1b? Maybe show faint flight lines so we can see the flight pattern.
>>>>> Those gaps are from filtered shadows, clouds, and water basins. We will include the flight tracks in the supplement.
lines 105-110: what is the impact of the gridding/smoothing choice on methane quantification? The gaussian filter is somewhat arbitrary and understanding the impacts of the smoothing on emissions estimates is important. What happens with no filter? What happens with filters that are more/less aggressive with different correlation lengths?
>>>>>> We have tried various smoothing/gridding options; they don't matter much. If we didn't smooth, we would not account for the spatial oversampling; we would get higher noise levels, and it would be harder to detect small emissions. We chose the Gaussian filter because it conserves mass and is a standard way to denoise an image.
line 112-113: What are the implications of assuming the source location was identified? This may be a big real-world challenge, and more discussion is warranted.
>>>>> By assuming that source locations were identified, we have a lower chance of having false positives. Identifying source locations is challenging for various reasons, and our algorithm for plume detection is still in development. We want to be explicit that our plume detection work is not fully automated yet.
2.3.1: Does the entire LES approach require knowledge of source site to run in its forward mode? How does this impact the feasibility to run for lots of unknown source locations and does this require first an algorithm to identify source location? Building off this and computational requirements, is this LES approach scalable?
>>>>>> We need to specify the LES domain and ensure that the source is within that domain (10 km x 10 km at the moment; it could be larger). We have a plume finder algorithm based on the DI approach to identify potential sources. Once the sources are identified by the plume finder algorithm, we can run the LES for important targets. With the current setup, a run takes less than 4 hours to complete, and the computational cost is much lower than the L2 data processing cost. Regarding scalability, once we automatically cross-check the DI-identified sources against the infrastructure inventories, we should be able to include the LES and the mIME in the pipeline.
line 147: Is the LES resolution coarser b/c of computation? Why even in this limited test/example cases could it not be run at comparable scale to the observations?
>>>> Correct; the compute time would be at least 25 times longer than with the current setup. We did try running the LES at ~12 m by 12 m resolution; the plumes look similar to those at 111.11 m by 111.11 m resolution. So we chose 111.11 m by 111.11 m to be consistent with the resolution we plan to use operationally. Note that this version of the LES uses a closure approach in the boundary layer that makes it impossible to resolve plumes below the 50 m scale.
line 155: It is a bit unclear what you do for Ueff in the main text, is it the Varon equation or do you derive yourself from LES model? You state in the supplement you end up using Varon. Can you evaluate these assumptions/parameterization at all with the LES runs you have? What type of errors might be imposed?
What is the averaging kernel? All the flux quantifications assume a uniform averaging kernel of 1.
>>>>> The results shown in the manuscript were based on Varon's equation. I did derive the relationship between Ueff and U from the simulation, though. Since I only simulate one plume at a time, a representative relationship between Ueff and U cannot be computed from our data.
line 175: How is the wind speed rotated?
>>>> The winds were rotated by (1) finding the second moment of inertia of the plume and (2) rotating the HRRR wind direction to coincide with the x-axis.
Figure 2: How are the different rectangles selected? Custom/by eye for each source? Why only step outward in direction of plume flow?
>>>> We used the plume finder algorithm to identify the location of the plume. The upwind boundary was then placed close to the upwind side of the source. The box grows by 1 pixel in each step, except in the upwind direction, where it moves over by ¼ pixel.
Could some of the variability be additional sources/sites? It looks possible from the disconnected plume in 2b.
Why does the flux increase after 700m, and why would that be the only place for other sources?
>>>>> The increase after 700 m is due to interference from CH4 emissions nearby or under the original plume.
section 2.4: A lot of 'we compared...' but then no display of the comparison or discussion of how that comparison is used. Leaves me wondering a bit what was learned/demonstrated with these comparisons. In places, diffuse-source capability is discussed but not really demonstrated in this paper.
line 264: Is MethaneAIR 10x10 or 20x20?
>>>> The boundaries between native pixels are separated by 5 meters across track and by 25 meters along track. The point spread function is roughly two and a half pixels wide, so the image oversamples spatially. We project the images onto a 10 m by 10 m grid and use the Gaussian filter to account for the spatial oversampling. In an updated version of the retrievals that we didn't use here, the spatial oversampling is accounted for in the gridding process and the Gaussian filter is not used; similar results are obtained.
line 265: This would suggest the authors should then state that, with this sampling approach, other sources within 1 km impact the ability to quantify emissions…
Please show the fully blinded 1:1 figure with slope/fitting as well as the after unblinding.
>>>>> We didn’t update the emissions after the unblinding. We only used the unblinding results to develop the decision tree to filter out potential low-quality emission estimates. In the revision, we can have two separate figures in the supplement.
It’s not really fair to call the release range small b/c lower than 1,000 kg/hr — 1,000 kg/hr is a very large emissions rate.
Fig: 3. how is the point at 250 flagged as below detection limit when the measured emission was over 200, higher than three other non-flagged points?
Are the flagged points included in the fit?
What are the reported error bars shown here? Is it true that the meter error on the ground is comparable to the retrieval? In Sherwin et al it looks like much smaller meter uncertainty.
>>>>>> For the 2021 experiment, the errors were comparable. In Fig. 3, unlike the no-detect point (red square, only one release), the flagged data (red circles) are the small plumes with low signal-to-noise ratios, as determined by the decision tree. All points are included in the fit. The error bars are the 95 % CIs reported by the Stanford team and from our own estimates.
line 332: While invoking rapture is entertaining, I believe you mean rupture.
>>>>> We will fix this
line 356: need to add with appropriate testing and validation of the satellite…
>>>>> We will fix this
line 366: well, but it is computationally intensive and requires accurate input winds (HRRR) and accurate representation of surface topography and energy balance as well…
supplement:
S5: so what happens with negative values? since you use the average to subtract you will have a lot of negative values…
could you expand the discussion in the final two paragraphs? conceptually OLS would make the most sense. Unless metered errors were larger than anticipated and correlated with release rate the heteroskedasticity test is surprising.
>>>>> We have already commented that the errors in the 2021 controlled releases are not negligible. Thus, the York fit provides the maximum likelihood estimator.
Further question on quantification related to the retrieval: What about topography changing the column air mass, and the air mass in the pbl. How does that come into play in these quantification measures and does that need to be addressed/discussed?
>>>>> The retrievals take the change in topography into account, since they are multilayer retrievals. For these comparisons, there is no significant topography near the point of the releases.
Citation: https://doi.org/10.5194/egusphere-2023-822-AC2
-
AC2: 'Reply on RC2', Apisada Chulakadabba, 18 Jul 2023
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2023-822', Anonymous Referee #1, 30 Jun 2023
Methane Point Source Quantification Using MethaneAIR: A New Airborne Imaging Spectrometer
General comments:
This valuable case study analyses data from the airborne imaging spectrometer MethaneAIR, the aircraft precursor instrument for MethaneSAT. In addition to a comprehensive explanation of the methane retrieval algorithms, backed up by a detailed supplement, it contains two interesting and important highlights: overflights of a controlled methane release experiment to strengthen the confidence in the method and to quantify its detection limit, and complementary atmospheric modelling to handle the issue of methane plumes being distorted by atmospheric turbulence. The paper is well written and presents helpful figures and tables. I have the following minor comments.
Specific Comments:
Introduction: it is good to hear about the positive development towards MethaneSAT, yet we should not forget that imaging spectroscopy has limitations. The intro would benefit from short explanations on how MethaneAIR/SAT will cope with variable or low surface albedo, the presence of aerosol in some of the plumes, the necessity to use CO2 as a retrieval proxy, and the necessary instrument design tradeoff between high spectral and high spatial measurement resolution. Also, it is OK to cite many similar remote sensing techniques, yet IPDA lidar is missing.
Section 2.1: you could/should mention that tropopause height variations influence the methane column due to less methane in the stratosphere. This may play a role at HIAPER flight altitudes and will certainly affect MethaneSAT data.
Figures 1 and 2 would benefit from displaying a km scale and an arrow indicating the approximate wind direction.
Line 140: … 1.5 standard deviations of the inflow values, above the median value of the inflow…
Line 141: … setting values below that value to NA.
Equations 1 and 2: for the sake of logic I would place Eq 2 before Eq 1.
Lines 158-160: are a repetition of lines 150-152; should be removed.
Line 219: “… a replacement for each synthetic observation.” I do not understand this. Please explain more precisely with what you replace the original inflow pixels.
Section 2.6.1: at which altitude (agl) was the methane released? The altitude plays a role in subsequent (turbulent) mixing and plume development. The LES release altitude should be selected accordingly.
Line 324: Typical emission rates were …
Figure 4: there is an inconsistency with Line 341 where you state that the correlation statistics are computed for emissions exceeding 500 kg/hr. The figure also shows emissions < 500 kg/hr. You should at least display those differently, or remove them from the figure. Also, I do not understand the red square: why does it represent “no detection” since it lies at about 700 kg/hr and therewith well above any detection limit?
Line 395: the reference for Chan Miller et al is missing.
Supplement, S1.1: I am afraid that the coefficients a and b depend on the ambient weather, or more precisely, on the turbulence conditions. Then, the relationship between U_eff and U_10m would lose its universality?
S1.1: what do you mean by overfitting? This term is not familiar to me.
S1.2, before equation S7: Combining equations S5 and S6… (not A5 and A6)
S1.3: I am afraid that the ratio method is only applicable under the condition that the wind velocity is correct?
Figures S1 and S2: add the Stanford release rates for reference. I would plot lines with symbols instead of bars. Maybe merge both figures for better comparison? What do you mean with “endpoints”?
Figures S5 and S6: maybe merge both figures for better comparison, plotting lines instead of bars?
Citation: https://doi.org/10.5194/egusphere-2023-822-RC1 -
AC1: 'Reply on RC1', Apisada Chulakadabba, 30 Jun 2023
Thank you for your constructive feedback. We agree that a concise discussion of the performance questions/limitations should be added to the paper. We appreciate your time and will carefully address all your comments in the revision.
Citation: https://doi.org/10.5194/egusphere-2023-822-AC1
-
AC1: 'Reply on RC1', Apisada Chulakadabba, 30 Jun 2023
-
RC2: 'Comment on egusphere-2023-822', Anonymous Referee #2, 10 Jul 2023
Review of “Methane Point Source Quantification Using MethaneAIR: A New Airborne Imaging Spectrometer” by Chulakadabba et al.
Chulakadabba et al. present a study demonstrating point source methane quantification from the MethaneAIR imaging spectrometer. The core of the work is evaluation by comparison with a controlled release experiment. Multiple point source quantification methods are evaluated, and the controlled release is used for evaluation and method development purposes. The paper is appropriately placed in AMT. Overall the paper is well written and represents important work in building confidence and robustness in MethaneAIR quantifications of methane point sources. I have a couple concerns outlined below. Once these are satisfactorily addressed publication would be warranted.
Conceptual/Larger concerns:
Missing information/reference on the retrieval: This paper is focused on the evaluation of quantification of point source emissions, which is a reasonable and appropriate scope. However, we are missing critical information on the retrieval product used to evaluate some elements of the manuscript. Chan Miller 2022 is cited repeatedly as a reference for the retrieval used, however there is not a journal or DOI listed with this reference. Extensive google scholar searching shows no such article. I do find two useful/relevant articles not cited, Conway et al., AMTD 2023 and Staebell et al., AMT 2021 that show the data processing and spectral calibration of MethaneAir respectively (those both should be cited here).
This leaves a rather problematic gap, as we have no information on the retrieval or how the XCH4 values are produced. Are these true total column values or just below the aircraft partial columns? What is the averaging kernel? This latter question is critically important as the quantification methods all do no discuss averaging kernel/sensitivity and implicitly assume that MethaneAir has a uniform averaging kernel, at least in the boundary layer. Is this true? Or should some correction be applied in the quantification to account for different sensitivity in the boundary layer (like in Wunch et al., GRL 2009)? I have further question on stability of retrieval and sensitivity to sampling – are all the flight legs getting the same background value and not have any aliasing that can manifest when combining the flight lines.
Slope fitting: I appreciate there is some nuanced discussion of York fitting versus OLS. For comparison with the controlled release, this is a scenario where OLS would strike me at first as most appropriate – the x-axis should be very well known and the y-axis have relatively large uncertainty. I’m surprised in Figure 3 that the error bars appear comparable on both axes – but I don’t know what those error bars represent and if they are the same. I’d like to see a little more concluding discussion on the slope methods. Right now it reads a bit as though the authors internally disagree on what slope method is correct, and presented both with a lean towards the York. But what should the community draw from this? To me, OLS would still be the correct approach for controlled releases provided the error is < 3 times that of the tested airborne/satellite method. In this example perhaps there is a special case where the metered release had higher error?
False positive/false negative rate?: Can anything be said about false positive or false negative rate from the data collected?
Care needed with extrapolation to MethaneSAT: At many times, particularly in the intro, there is an emphasis on the similarity and extensibility of this work to MethaneSAT. This needs to be presented with precision, as the work here demonstrates that these point source quantification approaches can work with appropriate imaging spectrometer data. So fair to say based on design specifications, this is extensible to MethaneSat, but we do not yet know what MethaneSat spectra will look like. (lines 41-43 for example). Also, it isn’t indicated in this paper, but I had the impression different detectors are used in MethaneSAT and MethaneAIR, which can lead to significant differences. If this is so, it should be acknowledged and the similarities between the two instruments should not be overstated.
Care needed with discussion of ‘independent’ winds, and extensibility of results/approach outside US where HRRR is not available: In the abstract (line 17) it is stated the independent winds are used with different approaches. In fact, HRRR is foundational in both the LES and the DI approaches. I recognize driving the LES with HRRR is not identical to using HRRR directly, but in the end the winds are based on the same input drivers. This should be clearly acknowledged. Further, the implications of this should be discussed as this is not an independent wind comparison. Also, HRRR is not available outside the US, so it should be discussed the impact that would/could happen for both LES and DI approaches with different/coarse resolution/less accurate wind fields. The impacts will be different for the two approaches. And the LES model will also be dependent on inputs for surface topography and energy balance which might be worse outside the US. There is a broader question embedded here in the scalability and extensibility of the LES approach.
Specific comments/line by line:
abstract line 17: states that LES and DI use different wind products – please correct this statement.
abstract concluding sentence: This work doesn’t demonstrate potential of “our instruments”. It demonstrates potential of methaneAIR and suggests that the quantification method should be transferrable to the satellite if the satellite meets design specs.
Around lines 35: Sentinel 2 , WorldView 3 were not designed for methane sensing but do it anyways. Also are missing PRISMA here. It is important to clarify whether the instrument was designed for methane use case or not. Actually aviris-ng was also not designed for methane either (though carbonmapper will be). One point being that sensors designed/dedicated to this purpose should outperform sensors reanalyzed for this.
line 41: “nearly identical spectroscopy”. Don’t the two instruments use different detectors? Don’t overstate the similarities.
line 43: Methanesat is designed to have the capabilities — but we do not yet know it will (fingers crossed).
line 47: Clarify, you are validating emissions estimates here, you are not validating concentrations in this paper.
line 65 typo “dein”
Line 74: Using HRRR is restricted to US. If wanting to do this elsewhere, what happens? Some discussion of input met field dependence/importance?
line 88-89: What is interference from nearby activity? Wouldn’t one say distinguishing amongst other activity important?
Table 1: Why start on RF04 - what happened with RF01-03?
What was successful flight fraction/instrument duty cycle?
Figure 1: Is this true total column XCH4 or partial column? What are gaps in Figure 1b? Maybe show faint flight lines so we can see the flight pattern.
lines 105-110: what is the impact of the gridding/smoothing choice on methane quantification? The gaussian filter is somewhat arbitrary and understanding the impacts of the smoothing on emissions estimates is important. What happens with no filter? What happens with filters that are more/less aggressive with different correlation lengths?
line 112-113: What are the implications of assuming source location was identified? This may be big real world challenge and more discussion in warranted.
2.3.1: Does the entire LES approach require knowledge of source site to run in its forward mode? How does this impact the feasibility to run for lots of unknown source locations and does this require first an algorithm to identify source location? Building off this and computational requirements, is this LES approach scalable?
line 147: Is the LES resolution coarser b/c of computation? Why even in this limited test/example cases could it not be run at comparable scale to the observations?
line 155: It is a bit unclear what you do for Ueff in the main text, is it the Varon equation or do you derive yourself from LES model? You state in the supplement you end up using Varon. Can you evaluate these assumptions/parameterization at all with the LES runs you have? What type of errors might be imposed?
What is the averaging kernel? All the flux quantification assume uniform averaging kernel of 1.
line 175: How is the wind speed rotated?
Figure 2: How are the different rectangles selected? Custom/by eye for each source? Why only step outward in direction of plume flow?
Could some variability be addition for other sources/sites? It looks possible from the disconnected plume in 2b.
Why does the flux increase after 700m, and why would that be the only place for other sources?
section 2.4: A lot of ‘we compared..’ but then not display of comparison or discussion of how that comparison is used. Leaves me wondering a bit what was learned/demonstrated with these comparisons.
In places diffuse sources capability is discussed but not really demonstrated with this paper.
line 264: Is methane air 10x10 or 20x20?
line 265: This would suggest the authors should then state that with this sample approach other sources within 1km impact ability to quantify emissions…
Please show the fully blinded 1:1 figure with slope/fitting as well as the after unblinding.
It’s not really fair to call the release range small b/c lower than 1,000 kg/hr — 1,000 kg/hr is a very large emissions rate.
Fig: 3. how is the point at 250 flagged as below detection limit when the measured emission was over 200, higher than three other non-flagged points?
Are the flagged points included in the fit?
What are the reported error bars shown here? is it true that the meter error on the ground is comparable to the retrieval? in Sherwin et al it looks like much smaller meter uncertainty
line 332: While invoking rapture is entertaining, I believe you mean rupture.
line 356: need to add with appropriate testing and validation of the satellite…
line 366: well, but it is computationally intensive and requires accurate input winds (HRRR) and accurate representation of surface topography and energy balance as well…
supplement:
S5: so what happens with negative values? since you use the average to subtract you will have a lot of negative values…
could you expand the discussion in the final two paragraphs? conceptually OLS would make the most sense. Unless metered errors were larger than anticipated and correlated with release rate the heteroskedasticity test is surprising.
Further question on quantification related to the retrieval: What about topography changing the column air mass, and the air mass in the pbl. How does that come into play in these quantification measures and does that need to be addressed/discussed?
Citation: https://doi.org/10.5194/egusphere-2023-822-RC2 -
AC2: 'Reply on RC2', Apisada Chulakadabba, 18 Jul 2023
Thank you so much R2 for your constructive comments. We appreciate your time and will carefully address all your comments in the revision. Below are some of the responses to your concerns.
Conceptual/Larger concerns:
Missing information/reference on the retrieval: This paper is focused on the evaluation of quantification of point source emissions, which is a reasonable and appropriate scope. However, we are missing critical information on the retrieval product used to evaluate some elements of the manuscript. Chan Miller 2022 is cited repeatedly as a reference for the retrieval used, however there is not a journal or DOI listed with this reference. Extensive google scholar searching shows no such article. I do find two useful/relevant articles not cited, Conway et al., AMTD 2023 and Staebell et al., AMT 2021 that show the data processing and spectral calibration of MethaneAir respectively (those both should be cited here).
This leaves a rather problematic gap, as we have no information on the retrieval or how the XCH4 values are produced. Are these true total column values or just below the aircraft partial columns? What is the averaging kernel? This latter question is critically important as the quantification methods all do no discuss averaging kernel/sensitivity and implicitly assume that MethaneAir has a uniform averaging kernel, at least in the boundary layer. Is this true? Or should some correction be applied in the quantification to account for different sensitivity in the boundary layer (like in Wunch et al., GRL 2009)? I have further question on stability of retrieval and sensitivity to sampling – are all the flight legs getting the same background value and not have any aliasing that can manifest when combining the flight lines.
>>>>> The Chan Miller 2023 should be submitted to AMT soon. I can share the current draft with R2. I agree that we missed the references to Conway et al., AMTD 2023, and Staebell et al., AMT 2021. We shall include them in the updated manuscript.
>>>>> It’s a true column comprising the paths from space to the ground and the ground to the aircraft (~ 12 km). However, since most of the excess methane is within the boundary layer (below the aircraft), we assume that the averaging kernel is applicable for the lowest kilometers. We will include a graph showing the averaging kernel, which is slightly larger near the surface than in the upper atmosphere. The detailed discussion should be in the Chan-Miller et al.
>>>> No, we didn’t have a uniform averaging kernel. We use multilayer retrievals that include the shape of the averaging kernels.
Slope fitting: I appreciate there is some nuanced discussion of York fitting versus OLS. For comparison with the controlled release, this is a scenario where OLS would strike me at first as most appropriate – the x-axis should be very well known and the y-axis have relatively large uncertainty. I’m surprised in Figure 3 that the error bars appear comparable on both axes – but I don’t know what those error bars represent and if they are the same. I’d like to see a little more concluding discussion on the slope methods. Right now it reads a bit as though the authors internally disagree on what slope method is correct, and presented both with a lean towards the York. But what should the community draw from this? To me, OLS would still be the correct approach for controlled releases provided the error is < 3 times that of the tested airborne/satellite method. In this example perhaps there is a special case where the metered release had higher error?
>>>>> Two main points that we want to convey.
- The community should be aware of the errors from the releases. In some cases (such as this controlled release experiment), errors are comparable. Thus OLS is inappropriate. We plan to add a second set of releases in 2022. The Stanford team had improved the instrumentation. In this new set of releases, their instrument errors are much smaller than our estimates.
- York Fit is more appropriate as the regression was designed to have errors for both the x-axis and y-axis variables. In the second case where the Stanford errors are close to being negligible, the YorkFit is still appropriate and gives similar answers to OLS.
False positive/false negative rate?: Can anything be said about false positive or false negative rate from the data collected?
>>>>> We can definitely say something about the rates, but we can’t generalize them for future cases. This is the limitation of any controlled release experiment.
Care needed with extrapolation to MethaneSAT: At many times, particularly in the intro, there is an emphasis on the similarity and extensibility of this work to MethaneSAT. This needs to be presented with precision, as the work here demonstrates that these point source quantification approaches can work with appropriate imaging spectrometer data. So fair to say based on design specifications, this is extensible to MethaneSat, but we do not yet know what MethaneSat spectra will look like. (lines 41-43 for example). Also, it isn’t indicated in this paper, but I had the impression different detectors are used in MethaneSAT and MethaneAIR, which can lead to significant differences. If this is so, it should be acknowledged and the similarities between the two instruments should not be overstated.
>>>>>>> Correct. MethaneSAT and MethaneAIR have two different detectors. We will add comments to the manuscript regarding the effects.
Care needed with discussion of 'independent' winds, and extensibility of results/approach outside the US where HRRR is not available: In the abstract (line 17) it is stated that independent winds are used with the different approaches. In fact, HRRR is foundational in both the LES and DI approaches. I recognize that driving the LES with HRRR is not identical to using HRRR directly, but in the end the winds are based on the same input drivers. This should be clearly acknowledged, and its implications discussed, as this is not an independent wind comparison. Also, HRRR is not available outside the US, so the impact of different/coarser-resolution/less accurate wind fields on both the LES and DI approaches should be discussed; the impacts will be different for the two approaches. The LES will also depend on inputs for surface topography and energy balance, which might be worse outside the US. There is a broader question embedded here about the scalability and extensibility of the LES approach.
>>>>> We agree that we should acknowledge that the DI and mIME used similar HRRR driver winds. We will also acknowledge that the LES might not scale globally.
Specific comments/line by line:
abstract line 17: states that LES and DI use different wind products – please correct this statement.
>>>> Will do.
abstract concluding sentence: This work doesn't demonstrate the potential of "our instruments". It demonstrates the potential of MethaneAIR and suggests that the quantification method should be transferable to the satellite if the satellite meets design specs.
>>>> Will fix this.
Around line 35: Sentinel-2 and WorldView-3 were not designed for methane sensing but do it anyway. PRISMA is also missing here. It is important to clarify whether each instrument was designed for the methane use case or not. Actually, AVIRIS-NG was not designed for methane either (though Carbon Mapper will be). One point being that sensors designed/dedicated to this purpose should outperform sensors repurposed for it.
>>>> We can include PRISMA here. We will also make sure to mention whether the instrument was designed for methane use.
line 41: “nearly identical spectroscopy”. Don’t the two instruments use different detectors? Don’t overstate the similarities.
>>>> We will replace "nearly identical" with "very similar."
line 43: MethaneSAT is designed to have the capabilities, but we do not yet know that it will (fingers crossed).
>>>> We will include this change.
line 47: Clarify: you are validating emission estimates here; you are not validating concentrations in this paper.
>>>> We will fix this.
line 65 typo “dein”
>>>> We will fix this.
Line 74: Using HRRR is restricted to the US. If one wants to do this elsewhere, what happens? Some discussion of input met-field dependence/importance?
>>>>> We tried estimating the emissions using different wind products. It’s hard to generalize which product is better or worse than the others.
line 88-89: What is interference from nearby activity? Wouldn't one say that distinguishing amongst other activity is important?
>>>>> In this case, interference from nearby sources means methane enhancements from nearby sources that cannot be separated from the source of interest.
Table 1: Why start on RF04 - what happened with RF01-03?
What was the successful flight fraction/instrument duty cycle?
>>>>> RF04 is simply the research flight identifier within the MethaneAIR campaign series. The first three research flights (RF01–RF03) were successful engineering flights; their main purpose was not methane data collection but making sure that the instrument was working properly. We did not report data from the engineering flights.
Figure 1: Is this a true total-column XCH4 or a partial column? What are the gaps in Figure 1b? Maybe show faint flight lines so we can see the flight pattern.
>>>>> Those gaps are from filtered shadows, clouds, and water basins. We will include the flight tracks in the supplement.
lines 105-110: What is the impact of the gridding/smoothing choice on methane quantification? The Gaussian filter is somewhat arbitrary, and understanding the impacts of the smoothing on emission estimates is important. What happens with no filter? What happens with filters that are more or less aggressive, with different correlation lengths?
>>>>>> We have tried various smoothing/gridding options; they do not change the results much. Without smoothing we would not account for the spatial oversampling, and we would get higher noise levels, making small emissions harder to detect. We chose the Gaussian filter because it conserves mass and is a standard way to denoise an image.
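As a concrete illustration of that choice, here is a sketch of a normalized Gaussian denoising step; the `sigma_pixels` value and the NaN handling are our illustrative assumptions, not the parameters used in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_xch4(xch4, sigma_pixels=2.0):
    """Denoise a gridded XCH4 image with a normalized Gaussian filter.

    xch4 : 2-D array of column methane (ppb) on the regular output grid,
           with NaN where shadows, clouds, or water were filtered out.
    The normalized convolution preserves a constant field and approximately
    conserves the total enhancement over the valid pixels.
    """
    valid = np.isfinite(xch4)
    filled = np.where(valid, xch4, 0.0)
    num = gaussian_filter(filled, sigma=sigma_pixels)
    den = gaussian_filter(valid.astype(float), sigma=sigma_pixels)
    return np.where(den > 0, num / den, np.nan)
```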
line 112-113: What are the implications of assuming the source location was identified? This may be a big real-world challenge, and more discussion is warranted.
>>>>> By assuming that source locations were identified, we have a lower chance of false positives. Identifying source locations is challenging for various reasons, and our plume detection algorithm is still under development. We want to be explicit that our plume detection work is not fully automated yet.
2.3.1: Does the entire LES approach require knowledge of the source site to run in its forward mode? How does this impact the feasibility of running for many unknown source locations, and does this first require an algorithm to identify the source location? Building off this and the computational requirements, is this LES approach scalable?
>>>>>> We need to specify the LES domain and ensure that the source is within it (10 km x 10 km at the moment; it could be larger). We have a plume finder algorithm based on the DI approach to identify potential sources; once sources are identified, we can run the LES for important targets. With the current setup, a run takes less than 4 hours, and the computational cost is much cheaper than the L2 data processing cost. Regarding scalability, once we automatically cross-check the DI-identified sources against infrastructure inventories, we should be able to include the LES and the mIME in the pipeline.
line 147: Is the LES resolution coarser because of computation? Why, even in these limited test/example cases, could it not be run at a scale comparable to the observations?
>>>> Correct; the compute time would be at least 25 times longer than with the current setup. We did try running the LES at ~12 m by 12 m resolution, and the plumes look similar to those at 111.11 m by 111.11 m. So we chose 111.11 m by 111.11 m to be consistent with the resolution we plan to use operationally. Note that this version of the LES uses a closure approach in the boundary layer that makes it impossible to resolve plumes below the ~50 m scale.
line 155: It is a bit unclear what you do for Ueff in the main text: is it the Varon equation, or do you derive it yourself from the LES model? You state in the supplement that you end up using Varon. Can you evaluate these assumptions/parameterizations at all with the LES runs you have? What type of errors might be imposed?
What is the averaging kernel? All the flux quantifications assume a uniform averaging kernel of 1.
>>>>> The results shown in the manuscript were based on Varon's equation. I did derive the relationship between Ueff and U from the simulation, though. Since I only simulate one plume at a time, a representative relationship between Ueff and U cannot be computed from our data.
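For context, the IME relation of Varon et al. (2018) referenced here is, schematically,

$$
\mathrm{IME} = \sum_{j=1}^{N} \Delta\Omega_j\,A_j,
\qquad
Q = \frac{U_\mathrm{eff}}{L}\,\mathrm{IME},
\qquad
U_\mathrm{eff} = f\!\left(U_{10}\right),
$$

where $\Delta\Omega_j$ is the column enhancement in pixel $j$ (kg m$^{-2}$), $A_j$ the pixel area, $L$ a plume length scale, and $f$ an effective-wind parameterization calibrated against LES (the "Varon equation" mentioned above).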
line 175: How is the wind speed rotated?
>>>> The winds were rotated by (1) finding the second moment of inertia of the plume and (2) rotating the HRRR wind direction so that the plume axis coincides with the x-axis.
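A minimal sketch of that two-step rotation (illustrative names; the operational code may differ):

```python
import numpy as np

def plume_axis_angle(domega, x, y):
    """Angle (radians, CCW from the x-axis) of a plume's principal axis,
    from the second central moments of the enhancement field.

    domega : 2-D column enhancement used as a mass weight
    x, y   : 2-D coordinate grids (m)
    """
    m = np.nansum(domega)
    xc = np.nansum(domega * x) / m
    yc = np.nansum(domega * y) / m
    mxx = np.nansum(domega * (x - xc) ** 2) / m
    myy = np.nansum(domega * (y - yc) ** 2) / m
    mxy = np.nansum(domega * (x - xc) * (y - yc)) / m
    return 0.5 * np.arctan2(2.0 * mxy, mxx - myy)

def rotate_wind(u, v, theta):
    """Express the wind in a frame rotated by theta, so that a plume whose
    axis lies at angle theta ends up aligned with the +x axis."""
    return (u * np.cos(theta) + v * np.sin(theta),
            -u * np.sin(theta) + v * np.cos(theta))
```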
Figure 2: How are the different rectangles selected? Custom/by eye for each source? Why step outward only in the direction of plume flow?
>>>> We used the plume finder algorithm to identify the location of the plume. The upwind boundary was then chosen close to the upwind side of the source. The box grows by 1 pixel in each step, except in the upwind direction, where it moves by 1/4 pixel (a sketch of the per-box flux computation follows this exchange).
Could some of the variability be additional sources/sites? It looks possible from the disconnected plume in 2b.
Why does the flux increase after 700m, and why would that be the only place for other sources?
>>>>> The increase after 700 m is due to interference from CH4 emissions nearby or under the original plume.
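To make the box construction concrete, here is a sketch of the per-box flux evaluation (Gauss's theorem on a regular grid; the array names and index conventions are our assumptions):

```python
import numpy as np

def box_outflux(domega, u, v, dx, i0, i1, j0, j1):
    """Net outward methane flux (kg/s) through the perimeter of the box
    spanning rows i0..i1 (south to north) and columns j0..j1 (west to east).

    domega : 2-D column enhancement above background (kg m-2)
    u, v   : eastward/northward winds on the same grid (m s-1)
    dx     : grid spacing (m)
    """
    # Outward normal flux on each edge: enhancement * normal wind * edge length
    east  =  np.sum(domega[i0:i1, j1] * u[i0:i1, j1]) * dx
    west  = -np.sum(domega[i0:i1, j0] * u[i0:i1, j0]) * dx
    north =  np.sum(domega[i1, j0:j1] * v[i1, j0:j1]) * dx
    south = -np.sum(domega[i0, j0:j1] * v[i0, j0:j1]) * dx
    return east + west + north + south
```

Repeating this for the growing sequence of boxes yields a flux-versus-distance curve like that in Figure 2; a step up in the curve, such as the one after 700 m, signals an extra source entering the boxes.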
section 2.4: A lot of 'we compared…', but then no display of the comparison or discussion of how it is used. This leaves me wondering what was learned/demonstrated with these comparisons. In places, a diffuse-source capability is discussed but not demonstrated in this paper.
line 264: Is MethaneAIR 10x10 or 20x20?
>>>> The boundaries between native pixels are separated by 5 meters across track and 25 meters along track, and the point spread function is roughly two and a half pixels wide, so the image oversamples spatially. We project the images onto a 10 m by 10 m grid and apply the Gaussian filter to account for the spatial oversampling. In an updated version of the retrievals, not used here, the spatial oversampling is accounted for in the gridding process and no Gaussian filter is applied; similar results are obtained.
line 265: This would suggest the authors should then state that, with this sampling approach, other sources within 1 km impact the ability to quantify emissions…
Please show the fully blinded 1:1 figure with slope/fitting, as well as the version after unblinding.
>>>>> We didn’t update the emissions after the unblinding. We only used the unblinding results to develop the decision tree to filter out potential low-quality emission estimates. In the revision, we can have two separate figures in the supplement.
It's not really fair to call the release range small because it is below 1,000 kg/hr; 1,000 kg/hr is a very large emission rate.
Fig. 3: How is the point at 250 flagged as below the detection limit when the measured emission was over 200, higher than for three other non-flagged points?
Are the flagged points included in the fit?
What are the reported error bars shown here? Is it true that the meter error on the ground is comparable to the retrieval error? In Sherwin et al., the meter uncertainty looks much smaller.
>>>>>> For the 2021 experiment, the errors were comparable. In Fig. 3, unlike the non-detect (red square, only one release), the flagged data (red circles) are small plumes with low signal-to-noise ratios, as decided by the decision tree. All points are included in the fit. The error bars are the 95% CIs reported by the Stanford team and our estimates.
line 332: While invoking rapture is entertaining, I believe you mean rupture.
>>>>> We will fix this.
line 356: need to add with appropriate testing and validation of the satellite…
>>>>> We will fix this.
line 366: well, but it is computationally intensive and requires accurate input winds (HRRR) and accurate representation of surface topography and energy balance as well…
supplement:
S5: So what happens with negative values? Since you subtract the average, you will have a lot of negative values…
Could you expand the discussion in the final two paragraphs? Conceptually, OLS would make the most sense. Unless the metered errors were larger than anticipated and correlated with the release rate, the heteroskedasticity result is surprising.
>>>>> We have already noted that the errors in the 2021 controlled releases are not negligible; thus, the York fit provides the maximum-likelihood estimator.
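Explicitly, for uncorrelated Gaussian errors on both axes, the York fit minimizes

$$
S(a, b) = \sum_{i=1}^{N} \frac{\left(y_i - a - b\,x_i\right)^2}{\sigma_{y,i}^2 + b^2\,\sigma_{x,i}^2},
$$

which is the negative log-likelihood of the line; it reduces to (weighted) OLS as the $\sigma_{x,i}$ vanish, consistent with the code sketch given earlier.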
A further question on quantification related to the retrieval: what about topography changing the column air mass and the air mass in the PBL? How does that come into play in these quantification measures, and does it need to be addressed/discussed?
>>>>> The retrievals account for changes in topography since they are multilayer retrievals. For these comparisons, there is no significant topography near the release points.
Citation: https://doi.org/10.5194/egusphere-2023-822-AC2