Volatile organic compound fluxes in the San Joaquin Valley – spatial distribution, source attribution, and inventory comparison
Abstract. The San Joaquin Valley is an agricultural region in California that suffers from poor air quality. Since traffic emissions are decreasing, other sources of volatile organic compounds (VOCs) are gaining importance in the formation of secondary air pollutants. Using airborne eddy covariance, we conducted direct, spatially resolved flux observations of a wide range of VOCs in the San Joaquin Valley during June 2021 at 23–36 °C. Through landcover-informed footprint disaggregation, we were able to attribute emissions to sources and identify tracers for distinct source types. VOC mass fluxes were dominated by alcohols, mainly from dairy farms, while oak isoprene and citrus monoterpenes were important sources of reactivity. Comparisons with two commonly used inventories showed that isoprene emissions in the croplands were overestimated, while dairy and highway VOC emissions were generally underestimated in the inventories, and important citrus and biofuel VOC point sources were missing from the inventories. This study thus presents unprecedented insights into the VOC sources in an intensive agricultural region and provides much needed information for the improvement of inventories, air quality predictions and regulations.
Eva Y. Pfannerstill et al.
Status: open (until 14 Jun 2023)
- RC1: 'Comment on egusphere-2023-723', Anonymous Referee #2, 23 May 2023
Meteorological and VOC flux data https://csl.noaa.gov/projects/sunvex/
Citrus processing and ethanol manufacturing locations southern San Joaquin Valley https://arcg.is/1iCnXu0
Model code and software
VOC airborne eddy covariance code https://github.com/tilviola/Airborne_VOC_Flux
Footprint model code converted from R to Matlab (original: Natascha Kljun, Stefan Metzger) https://github.com/qdzhu/FLUX/blob/main/calc_footprint_KL04.m
In "Volatile organic compound fluxes in the San Joaquin Valley – spatial distribution, source attribution, and inventory comparison", authors Pfannerstill et al. report on the results of airborne (emission) flux measurements of VOCs in the San Joaquin Valley in California. The results are compared to emissions inventories and attributed to source types, revealing important mismatches, in particular underestimated or missing point sources.
Overall, the manuscript is well structured and clearly written. It is to the point and very pleasant to read. The methodology is sophisticated and makes excellent use of a (probably) unique observational dataset that appears carefully acquired. The results are generally solid, and the discussions and conclusions are well supported. A particular highlight is the authors' careful work in attributing airborne flux measurements to their respective emission sources via flux footprint disaggregation. Also, various existing inventories were considered, and specific additions to them proposed.
All in all, that makes for interesting results, which will be very useful for emissions modeling and air quality studies (especially for the San Joaquin Valley), as well as for flux measurement endeavors. In conclusion, I warmly recommend this study for publication in Atmospheric Chemistry & Physics.
I do have a few comments/questions, which mostly amount to wishes for a bit more detail or clarification. Almost all of them concern the methods descriptions and analysis approaches. I list them below, starting with 4 somewhat bigger comments, followed by a list of minor and technical comments. However, I expect that addressing these will only require minor revisions or clarifications.
L202-203: "A running mean of 2 km was applied to the 10-Hz fluxes to eliminate turbulence effects causing artificial emission and deposition, and sub-sampled to 200 m."
I am not completely following here. What did the sub-sampling procedure consist of? What is the nature of the artifacts that were sought to be avoided? (Vertical aircraft motion?) And how did those artifacts manifest, and how was it judged (or decided) that a 2-km running average (plus possibly that sub-sampling) would successfully deal with them?
It could also be worthwhile to illustrate that process in a supplement figure.
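For concreteness, a minimal sketch of how I picture the described smoothing and sub-sampling step. The window and grid lengths (2 km, 200 m) are from the quoted text; the ground speed and the use of a boxcar convolution are my assumptions, not details from the manuscript:

```python
import numpy as np

# Assumed ground speed (m/s) -- not stated in the manuscript; used only to
# convert the 2 km window and 200 m output grid into sample counts at 10 Hz.
GROUND_SPEED = 90.0
SAMPLE_RATE = 10.0  # Hz

def smooth_and_subsample(flux_10hz, window_m=2000.0, grid_m=200.0,
                         v=GROUND_SPEED, fs=SAMPLE_RATE):
    """Apply a running mean of window_m (m) to a 10-Hz flux series,
    then sub-sample the result onto a grid_m (m) spacing."""
    n_win = int(round(window_m / v * fs))  # samples per 2-km window
    n_sub = int(round(grid_m / v * fs))    # samples per 200-m step
    kernel = np.ones(n_win) / n_win        # boxcar running mean
    smoothed = np.convolve(flux_10hz, kernel, mode="same")
    return smoothed[::n_sub]

# Example: a noisy synthetic 10-Hz flux series
rng = np.random.default_rng(0)
flux = 1.0 + rng.normal(0, 0.5, size=5000)
out = smooth_and_subsample(flux)
```

If something along these lines was done, stating the effective window in samples (or seconds) and whether the sub-sampling was a simple decimation or a block average would already resolve my question.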
From reading Wolfe et al. (2018; in particular Section 3.4.2 therein), I have come to the understanding that the methods referenced and used here for estimating/calculating flux errors (in particular the random errors) have been developed for "traditional" eddy covariance calculations using ensemble averages, and may not be directly applicable to all CWT fluxes. Also, it does not become clear if the various uncertainties were determined for legs as a whole, or individual CWT fluxes, or some combination thereof.
Could the authors clarify that and comment on the applicability of the methods to their procedure given concerns/proposed methods by Wolfe et al.?
Section 2.5.7, Fig. S5
Are the authors able to speculate on what causes the KL15 footprints to come out so strongly biased, i.e., much too large/far?
Indeed, I have had similar experiences (unpublished). On the other hand, Hannun et al. (2020) appear to have obtained reasonable footprints for their airborne fluxes using KL15. (And they also explicitly write that the model is "applicable to many turbulence regimes and measurement heights".) Unfortunately, I have not yet had time to figure out the details in that paper. But I am curious about the applicability of KL15 in general, and about easily occurring oversights or misunderstandings when applying that parametrization, especially for airborne datasets.
Fig. 3 (and related results; also Section 2.3.3):
Several signals required corrections for fragmentation and/or hydration in the instrument (e.g., 2.3.3). It appears likely then that other signals are subject to similar issues, but which are simply not that well understood. Could such insufficiently understood induced reactions produce biases, in particular in light of the reported overall results (as presented for instance in Fig. 3)? And if so, what kind of biases should be considered? Or are the reported compounds actually well understood in terms of their response to PTR-MS?
L65-67: Were all "4-6 different altitudes" within the PBL flown inside the 300-400 m AGL range, or does that range only refer to the lowest racetracks? (In the latter case, how high did the stacks reach?)
L105+: Maybe the authors could briefly put the instrument parameters (in particular the E/N) into perspective for the reader less familiar with the instrumentation. E.g., what kind of instrument behavior is desired/expected/achieved from an E/N of 130 Td, or a potential gradient of 590 V?
L131: Regarding the sensitivity estimation "for all m/z without a corresponding gas standard", could the authors provide a reference for more details on that procedure?
I suggest also considering the recent findings by Vermeuel et al. (https://doi.org/10.5194/acp-23-4123-2023) regarding the corrections they needed for deriving isoprene from the C5H9+ signal. I am quite convinced by the correction method applied here, but it is worth double-checking.
L172: Unclear to me how that re-speciation was done, and if it was a result of this study or based on something else. Could the authors provide a little more detail, or a reference either to another study or to the section where that is discussed in more detail here?
L200: I assume so, but was the scale bias correction applied (e.g., Liu et al., 2007)?
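To make explicit what I mean: Liu et al. (2007) show that the conventional wavelet power spectrum is biased toward large scales, and that dividing the (cross-)power at each scale by that scale rectifies it. A minimal sketch, with hypothetical wavelet coefficient arrays (scales × time) of vertical wind and concentration that are my own naming, not the authors':

```python
import numpy as np

def rectified_cross_power(W_w, W_c, scales):
    """Liu et al. (2007) rectification: divide the cross-scalogram of
    vertical wind (W_w) and scalar (W_c) wavelet coefficients by the
    scale before integrating over scales."""
    cross = np.real(W_w * np.conj(W_c))  # cross-scalogram, (n_scales, n_time)
    return cross / scales[:, None]       # each scale row divided by its scale
```

A one-line statement in the methods on whether this rectification was applied before integrating the cross-scalogram would suffice.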
L235-236: The text points to references in Table S1. Where are these references? I fail to find them.
L238+: It would be very interesting to see an example or summary of that verification. Also, which monoterpene oxidation products and ratios were used and expected? (Probably best in the supplement.)
L260: Should probably specify that the "intercept" means the x-intercept.
2.5.4 + Fig. S2 + Table S1:
I probably get something wrong here, indicating that the section could be clarified somewhere. E.g., for benzene, the slope is -0.0741 and the intercept is 0.0838, so C is -0.884, and the correction factor should be (Eq. 4) between 1.21 (z/zi=0.2) and 2.4 (z/zi=0.66). But Table S1 lists a factor of 0.013 (though with an uncertainty of 1.24). Was the physical flux divergence simply quite variable between flights? (Or uncertain? And is Fig. S2 only showing the data for one flight?) ... Oh, Section 2.5.6 actually indicates only moderate variability ... I remain confused.
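For transparency, the arithmetic behind my numbers above, under my reading of Eq. 4 (which is an assumption on my part): the divergence correction factor is 1 / (1 + C·z/zi), with C being the ratio of the fitted slope to the intercept:

```python
# Benzene fit values quoted in my comment above.
slope, intercept = -0.0741, 0.0838
C = slope / intercept  # ratio of slope to intercept (my reading of Eq. 4)

def correction_factor(z_over_zi, C=C):
    """Flux divergence correction, assumed form 1 / (1 + C * z/zi)."""
    return 1.0 / (1.0 + C * z_over_zi)

low = correction_factor(0.2)    # lower end of the z/zi range flown
high = correction_factor(0.66)  # upper end of the z/zi range flown
```

This reproduces my 1.21 and 2.4, which is why the Table S1 value of 0.013 puzzles me; if Eq. 4 has a different form, my confusion may simply dissolve.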
Fig. 1: The footprints are presumably a campaign-long average? Not clear.
I am worried that the (important) geographic labels will become unreadable in the final version.
What do the dashed lines show?
Liu, Y. J., San Liang, X., and Weisberg, R. H.: Rectification of the Bias in the Wavelet Power Spectrum, J. Atmos. Ocean. Tech., 24, 2007.
Hannun, R. A., Wolfe, G. M., Kawa, S. R., Hanisco, T. F., Newman, P. A., Alfieri, J. G., Barrick, J., Clark, K. L., DiGangi, J. P., Diskin, G. S., King, J., Kustas, W. P., Mitra, B., Noormets, A., Nowak, J. B., Thornhill, K. L., and Vargas, R.: Spatial heterogeneity in CO2, CH4, and energy fluxes: insights from airborne eddy covariance measurements over the Mid-Atlantic region, Environ. Res. Lett., 15, 035008, 2020.
Vermeuel, M. P., Novak, G. A., Kilgour, D. B., Claflin, M. S., Lerner, B. M., Trowbridge, A. M., Thom, J., Cleary, P. A., Desai, A. R., and Bertram, T. H.: Observations of biogenic volatile organic compounds over a mixed temperate forest during the summer to autumn transition, Atmos. Chem. Phys., 23, 4123–4148, https://doi.org/10.5194/acp-23-4123-2023, 2023.
Wolfe, G. M., Kawa, S. R., Hanisco, T. F., Hannun, R. A., Newman, P. A., Swanson, A., Bailey, S., Barrick, J., Thornhill, K. L., Diskin, G., DiGangi, J., Nowak, J. B., Sorenson, C., Bland, G., Yungel, J. K., and Swenson, C. A.: The NASA Carbon Airborne Flux Experiment (CARAFE): instrumentation and methodology, Atmos. Meas. Tech., 11, 1757–1776, https://doi.org/10.5194/amt-11-1757-2018, 2018.