Biomass Burning Emissions Analysis Based on MODIS AOD and AeroCom Multi-Model Simulations
Abstract. We assessed the performance of 11 AeroCom models in simulating biomass burning (BB) smoke aerosol optical depth (AOD) in the vicinity of fires over 13 regions globally. By comparing multi-model outputs and satellite observations, we aim to: (1) assess the factors affecting model-simulated BB AOD performance using a common emissions inventory, (2) identify regions where the emission inventory might underestimate or overestimate smoke sources, and (3) identify anomalies that might point to model-specific smoke emission, dispersion, or removal issues. Using satellite-derived AOD snapshots to constrain source strength works best where BB smoke from active sources dominates background aerosol, such as in boreal forest regions and over South America and southern-hemisphere Africa. The comparison is poor where the total AOD is low, as in many agricultural burning areas, or where the non-BB background AOD is high, as in parts of India and China. Many inter-model BB AOD differences can be traced to differences in model-assumed values for the mass ratio of organic aerosol to organic carbon, the BB aerosol mass extinction efficiency, and the aerosol loss rate. The results point to the need for more BB cases available for study in some regions, and especially for more extensive, regional-to-global-scale measurements of aerosol loss rates and of detailed microphysical and optical properties; these would better constrain the models and help distinguish BB from other aerosols in satellite retrievals. More generally, additional efforts are needed to constrain aerosol source strength and other model attributes with multi-platform observations.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2024-1487', Anonymous Referee #1, 21 Jun 2024
Major:
- GFED 3.1 is very outdated and has been shown to underestimate fires, particularly small ones. I understand that this is what was used in the multi-model study, but, at a minimum, an uncertainty analysis showing the difference between GFED3.1 and updated fire inventories should be included, along with a discussion of GFED3.1's limitations.
- The models compared are also quite old; GEOS-Chem v9-02, specifically, is very outdated. I understand that that was what was used in the intercomparison, but the science is outdated enough to raise the question: is this comparison useful now? What is the added value of a paper like this when many of these fire comparisons have already been done, and this study uses outdated models and fire inventories? Most, if not all, of these takeaways have been shown in other work (papers by Tina Liu, Carter et al. (2020), Pan et al., and others).
- A more thorough discussion of MODIS AOD uncertainties would also be useful.
Minor:
- The intro should make mention of Carter et al. (2020) that looked at fire-influenced AOD in North America.
- Figure 6 would make more sense if BB were red across both MODIS and model instead of the current color scheme.
Citation: https://doi.org/10.5194/egusphere-2024-1487-RC1
RC2: 'Comment on egusphere-2024-1487', Anonymous Referee #2, 05 Jul 2024
Review of “Biomass Burning Emissions Analysis Based on MODIS AOD and AeroCom Multi-Model Simulations”, Petrenko et al., 2024, submitted to EGUsphere.
I’m rating this paper as “minor revisions”, after some waffling on my part between “minor” and “major”. I was concerned that the authors’ work did not investigate the variation in model AOD predictions resulting from the algorithms used in each model to calculate AOD and the aerosol mixing state properties – there’s some previous work where those assumptions have been shown to contribute up to a factor of 3.5 variation in the resulting AOD estimate from the model. This is apart from the sources of variability investigated by the authors. My concern was that the conclusion that some of the GFED smoke emissions values for some subregions were on the low side might instead be due to underestimates in the AOD calculation itself (cf. Curci et al., 2015, Atm. Env.). However, once I reached Table 2 in reading through the paper, I could see that the ratio values there suggested underestimates in model AOD that were outside the range of what might be expected via different aerosol mixing state assumptions – and I liked the authors’ approach of caveating their conclusions by grouping the analysis into 4 confidence classes (A, B, C, D).
I’m recommending minor revisions with the caveat that the authors add some discussion of the impact of aerosol mixing state and AOD calculation methodology on the results, including that the analysis of Table 2 and subsequent figures shows that the negative-bias AODs calculated by the models in this work are sufficiently large that AOD calculation methodology can’t explain the results – and hence bolstering the argument that the GFED 3 emissions are likely to be low. It would also be good to update the references in the introduction (I’ve given some examples in my comments below) – though I recognize that a large-scale model intercomparison such as this will by default end up having out-of-date input information and is limited by it. For example, GFED 5 came out earlier this year – so it’s worth a quick check through the literature and updating the introduction to include some of the more recent developments.
My detailed comments follow, identified by line number. Comments starting with an asterisk are more serious concerns – which are addressable via more discussion in the text.
Line 35, 40 Abstract: There's also the question of how well the model approach for estimating AOD works, which has appeared elsewhere in the literature. That is, one source of variation is the aerosol mixing state and the manner in which BB AOD estimation is carried out, rather than overestimates or underestimates of the amount of smoke. See Curci et al, 2015, Atm. Env., 115, pp 541-552 (https://www.sciencedirect.com/science/article/pii/S1352231014007018) and note that methods of estimating AOD from model values are generally biased low at 440 nm, and distributed about the observed values at 870 nm. One question here is the extent to which the authors' results imply that the smoke emissions are biased low, versus the methodologies used to estimate AOD in the models not being correct (or not making use of the right information for key parameters such as complex refractive index values). Reading through the paper, however, I can see that the overall biases the authors are seeing are larger than the differences that might be associated with AOD calculation methodology – I think some discussion of the Curci paper in a few places in the manuscript would support the argument that the underlying smoke emissions estimates are too low in some regions.
Lines 55 – 64. This paragraph is missing some of the more recent papers (sometimes by the same authors) and consequently comes across as being a bit out of date. There’s been a lot of work since the papers referenced here. Some of these may appear later in the paper, but some examples:
Anderson et al (GMDD, 2024): https://gmd.copernicus.org/preprints/gmd-2024-31/ (a model specifically designed for real-time forecasting, which uses hotspot detection and statistics of burned area per hotspot rather than FRP).
Chen et al 2019: GMD 12, 3283-3310, 2019 (same system as above, for North America).
Chen et al., 2023: most recent GFED 5 paper, Earth Science Data Discussions, 1-52, 2023.
Van der Werf et al., 2017: most recent GFED paper (I think - GFED4.1); Earth System Science Data 9(2), 697-720, 2017.
Wiedinmyer et al., 2023: FINN update, EGUsphere, 1-43, 2023.
The authors should also be referencing the previous intercomparison of biomass burning by Pan et al 2020: Atm. Chem. Phys., 20, 969-994, 2020.
… all of the above, if the authors haven’t already done so.
** Line 79. The authors should include in their list of things that need to be perfect, "The manner in which models calculate AOD from predicted aerosol fields, and the assumptions made in those calculations regarding aerosol properties such as mixing state and optical properties."
There really needs to be some discussion of how the variability in AOD calculations from model output depends on the assumptions regarding the aerosol mixing state, optical properties, etc, at this point.
The authors have (largely) missed an important issue here - that there is uncertainty in the AOD calculation itself, quite separate from the models' particulate matter estimates. Issues such as whether the particles' mixing state is homogeneous versus core-shell can influence the resulting AOD (Curci et al, Atm. Env., 115, 541-552, 2015). While models have been underestimating AOD relative to satellite retrievals for some time (e.g. see the Curci et al 2015 paper noted above), the extent to which, in the current paper, this is the result of underestimates in the models' emissions source term, or in the model outputs and aerosol property assumptions used in the models, is less clear. The authors need to acknowledge these issues and discuss the extent to which they may influence the authors' conclusions regarding emissions source strength. For example, Curci et al (2015) showed that the aerosol phys/chem properties could result in a factor of 2 variation in the calculated AOD at 440 nm and 870 nm over the EU, with some calculation methods/mixing state assumptions having much lower biases than others. The point here is that the extent of this bias will have a strong influence on the inferred emissions strength needed to improve model results. Later on, though, I note that the range of values in the authors’ Table 2 is sufficiently outside this range of variation that said calculation methodologies can’t be the sole contributor to the variability. Mentioning that range from Curci et al in the context of Table 2 would help bolster the authors’ suggestion that emitted smoke levels being biased low is at least part of the cause of the low AOD predictions.
Line 96: What methodology/optical property assumption was used in Petrenko et al 2012 to generate AOD, and where does that methodology fall with regards to Curci et al.'s comparisons?
Line 127: Minor wording clarification needed here, since the reader has not got that far into the paper: I assume "multi-model" here is referring to the overall modelling framework (global or regional air-quality models) rather than the portion of these models used to estimate AOD?
Line 131: Initialize the models' injection heights only? Or is the mass of emissions also initialized by the satellite data?
Line 134: is the Pan et al manuscript still “in prep” or has it been submitted? 2 years since 2022 now. Should this date be 2024?
Line 138: GFED3.1 is used as the inventory; for those who are not familiar with GFED, does it also provide injection heights, and if not, how are they calculated by each model? This can be a key issue for AOD: higher injection means faster dispersion and presumably lower downwind AOD values.
**Line 159: Table S2 is missing important information. It doesn't mention how the aerosol mixing state has been incorporated into each model's AOD calculation, and needs to do so. For example, are all models using Mie scattering? Are they core-shell approaches or homogeneous mixtures? Is a black carbon core assumed, etc.? Please include another column equivalent to Table 3 of Curci et al 2015 in Table S2, so the reader can see exactly what has been done.
Line 166: The differences in injection height could also be a factor in AOD variation. Have the authors quantified this impact or do other differences control model AOD performance?
- What will be the impact of the prescribed injection heights on AOD and hence ideas regarding emissions source strength?
- How much variation was present in the different models' estimates of PBL height, and how does that impact AOD calculations?
- I'm rather surprised that the models not using the prescribed "inject into the PBL" approach did not attempt to calculate the height from the model atmospheric thermodynamics (cf. Anderson et al, 2024 reference noted above).
Line 181: I’m surprised (perturbed) that the setup for the study did not require common emissions to be used as a constraint on the models, as has been done in regional air-quality model comparisons such as the Air Quality Model Evaluation International Initiative series of papers (Galmarini et al. and other special-issue papers in Atm. Env., ACP, etc.). Or do the authors here mean that an inventory generated from multiple sources was used in common by all participating models?
Line 201-204: The variation in chosen OA/OC ratios has been shown by the authors to have a large impact on primary OA emissions (a factor of 2.6/1.4 variation) – creating another source of variability. Can the authors provide some justification regarding why this was not harmonized across models in order to remove this source of uncertainty (or, equivalently, a run could have been done with a common OA/OC ratio used to demonstrate the relative impact)?
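To make the size of this concrete, here is a minimal sketch (in Python; the ratio endpoints are the ones quoted above, everything else is illustrative). Since emitted primary OA scales linearly with the assumed OA/OC ratio, two models given identical OC from the common inventory diverge immediately:

```python
oc_emission = 1.0                  # normalized OC mass from the common inventory
oa_oc_low, oa_oc_high = 1.4, 2.6   # endpoints of the model-assumed OA/OC ratios

spread = (oa_oc_high * oc_emission) / (oa_oc_low * oc_emission)
print(f"primary OA spread across models: factor {spread:.2f}")  # ~1.86
```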
Line 216: the authors need to provide some more basic information (a few sentences) about the MODIS AOD data and how it is used: for example, the wavelength of the retrieved AOD, the resolution of the satellite pixels, and whether individual AOD pixel values are matched to individual model values at the same time, or whether averages are used. With regards to the last of these: I wasn’t certain whether the background removal procedure makes use of AOD upwind of the fire locations at the time of the fire to determine background AOD levels for differencing, or whether long-term averages were used. Please clarify; different places in the paper seem to imply different ways the background values were constructed.
Line 255: I’m concerned about how model resolution relative to the size of the box will affect the comparison. How were the box sizes determined, again? For example, the highest resolution models will presumably be sampling more grid cells within the prescribed box to generate the average. These models presumably will also do a better job of resolving the maximum value within the plume. However, this advantage will be lost if the averaging box is large in size. I suggest the authors also add the model maximum within the sampling box as well as the average into their comparisons: perhaps the high resolution models have higher concentrations and hence better AODs locally... but this improvement may not be seen in the averages constructed by the authors.
Line 268: empty bullet point in the text. Suggest a “box maximum AOD” entry be added here.
Line 273: The use of averages over a common box size also has inherent risks of smoothing out the model values. What is the size of the satellite AOD retrieval pixel relative to these boxes, and the size of the model grid cells relative to these boxes?
Line 279, “generally lower BB AOD than the MODIS estimates”: Which might be expected, if AOD is e.g. 440 to 550 nm, based on Curci et al, 2015, and depending on the assumptions used in calculating model AOD. This does not necessarily imply that the emissions going into the models are low, at this stage in the paper; it could be the result of an inaccurate method for calculating AOD. Or some of the other assumptions, such as the model emissions being (most models) limited to within the boundary layer - which is not likely to be a limiting factor for large fires.
**Line 300, Table 2. Up to this point I was thinking that the absence of discussion of aerosol optical depth calculation methodology and assumptions would warrant “major revisions” for the paper; this Table convinced me it should be minor. There needs to be a bit more discussion of the values shown here relative to the variations in Curci et al., 2015. What I can see here is that the ratios show negative biases in excess of the range of values seen in Curci et al, where AOD calculation methodologies and aerosol mixing state properties were compared. Which in turn helps the authors’ implication that the analysis is suggesting that some biomass burning emissions levels from GFED are low. The generic model negative biases in some regions (CEAM, TENA, SEAS, BOAS_W, CEAS-E, CEAS-W) are larger than what would be expected from choices of model mixing state and the manner in which AOD is calculated. That is, Curci et al showed a generic negative bias factor of 1 to 3.5 in model-generated AOD calculations (their Figure 4), while the regions in the current authors’ work have even larger negative biases. The small numbers thus can't be attributed solely to the model assumptions on aerosol mixing state and how they calculate AOD; i.e., the underestimates are outside of the range that might reasonably be associated with AOD calculation methodology, and hence can be attributed to other causes (e.g. emissions magnitude, etc.).
Similarly, given that Curci et al showed a general underestimate of AOD at shorter wavelengths, the values higher than 1 in this table should be considered "outside of the range that might be expected from model AOD calculation variability" and hence may be due to other causes, such as emissions overestimates. It's worth bringing this up in the discussion, since it helps bolster the argument that the analysis is showing an underestimate in smoke emissions magnitude.
**Bottom of Table 2: The authors should add another row with the model standard deviation across regions. i.e. some models have an average that appears to be good (close to 1.0), but they also have a large variation between regions. Others may have a consistent relative bias but less variability. Standard deviation would help the reader distinguish the models with "good average performance" from those with "good average performance due to balancing positive and negative biases in some regions".
Suggest the authors add this point to their numbered points in section 3.2. Also, line 348: can a “diversity” row or column be added to Table 2? That would address that concern.
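A minimal sketch of what the suggested extra standard-deviation row would expose (Python; all ratio values invented for illustration):

```python
import numpy as np

# Hypothetical model-to-MODIS BB AOD ratios across four regions.
ratios = {
    "model_balanced": [0.3, 1.9, 0.9, 1.1],    # mean near 1 via cancelling biases
    "model_consistent": [0.8, 0.9, 1.0, 1.1],  # mean near 1 with little spread
}
for name, r in ratios.items():
    r = np.asarray(r, dtype=float)
    print(f"{name}: mean = {r.mean():.2f}, std across regions = {r.std(ddof=1):.2f}")
```

Both models average close to 1.0, but only the standard deviation reveals that the first gets there by balancing large regional over- and underestimates.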
Following line 350: Text needs to include another paragraph with interpretation of Table 3. What does this Table tell the reader about the relative source strength of the emissions (are they biased high or low), for example, and the authors' level of confidence in that inference? How the final column was calculated from the numbers in the Table or other information was not clear; it needs a better description in the text.
Table 3: The Table caption needs to include the color coding mentioned in the text. Not mentioned in the text were the blue values (unless I missed it; apologies if so) - presumably the most uncertain? Also, there are two different shades of green shown, not described in the text by the time the reader gets to the table. Was the lighter green intended to represent cases where one of the two sets of values used to get the ratio was “yellow” and the other “green”, or something else entirely? The color coding needs to be better described in the Table caption.
Table 3: I’m not sure how the final column was calculated; please explain. I tried perturbations of the numbers appearing in the other columns, but couldn’t reproduce the final-column ratio values.
Figure 6 discussion could have used a bit more interpretation. E.g., it looks like BONA and SHAF have a tendency to underestimate AOD from non-BB sources, while CEAS-E overestimates non-BB AOD. For BB, BONA, SHAF, NHAF, NHSA, and AUST are OK, but the rest are biased low. Relative to Curci et al, BOAS-W, CEAM, TENA, SEAS, CEAS-E, and SHSA are probably significantly above bias levels that could be associated with the method of AOD calculation, which is an important point to make. Suggest adding something to the figure like a horizontal arrow showing the direction of increasing uncertainty in the interpretation (I got the impression from the previous section that the A group is the highest confidence and the D group the lowest; it would be nice to have a visual cue on the figure itself).
Figure 6b. Note that from Curci et al, a second region could be drawn, at about 1/3.5 ≈ 0.29: values below this line could not be attributed to the model AOD calculation methodology and model assumptions regarding mixing state, assuming AOD is at about 440 nm (somewhere please state the AOD wavelength). So BOAS_W, CEAM, TENA, SEAS, CEAS_E, and CEAS_W have biases beyond what might be expected from AOD calculation methodology, which is a useful thing to be able to say.
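To spell out the screening I have in mind (Python sketch; the floor follows from the factor-3.5 spread in Curci et al., and the region ratios are invented):

```python
methodology_floor = 1.0 / 3.5  # ~0.29: the largest negative bias plausibly
                               # attributable to AOD calculation method alone

region_ratio = {"BOAS_W": 0.15, "CEAM": 0.20, "BONA": 0.85}  # invented values
beyond_methodology = [k for k, v in region_ratio.items() if v < methodology_floor]
print(beyond_methodology)  # regions needing another explanation, e.g. emissions
```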
Line 413: See my above comments regarding the relative role that the AOD calculations and mixing state assumptions might play - this could reduce the list a bit further, to include only those cases where the differences are larger than calculation methodology could account for.
Line 435: “properties used in the model.” This doesn't surprise me, given Curci et al; range of a factor of 2 to 3 in model results depending on these assumptions - and added to that, there is the range of emitted values resulting from the OA/OC ratio employed in each model.
Can the authors modify the diagram to indicate the relative impact of the OA/OC ratio on the diversity?
Line 442: “properties used in each model” ... as well as the assumptions used to calculate AOD in each model... should include this. Similarly, line 445, “mixing state” should be added to this list.
Line 468: Explain how a factor of 2 leads to a diversity of 28% (offline; doesn't have to be in the paper) - I'm curious to see how the calculation works.
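My own back-of-the-envelope, assuming diversity is defined the usual AeroCom way as the across-model standard deviation over the mean (Python; the OA/OC values are invented but span roughly the quoted factor of ~2):

```python
import numpy as np

# Invented OA/OC ratios for 11 models, spanning roughly a factor of 2.
oa_oc = np.array([1.4, 1.4, 1.6, 1.8, 1.8, 2.0, 2.0, 2.2, 2.4, 2.6, 2.6])

# Diversity: across-model standard deviation normalized by the mean.
diversity_pct = 100.0 * oa_oc.std(ddof=1) / oa_oc.mean()
print(f"diversity ~ {diversity_pct:.0f}%")  # ~22% with these invented values
```

So a factor-of-2 spread giving a diversity in the 20-30% range is plausible; the exact 28% presumably reflects the actual distribution of the 11 models' ratios.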
Line 470: How many of the models include the formation of secondary organic aerosols, another form of OA that will be created by the organic gases released by fires? Will this influence diversity? See also my comment for line 485, below.
Line 474: General comment: It’s unfortunate that a scenario with all models being forced to use the same OA/OC ratio was not included! This variation in how the emissions were treated muddies the waters considerably. Why were the models not constrained to use the same values (if only in a scenario) – was this issue not recognized until post-simulations?
Line 479: Alternatively, do any of these models include secondary organic aerosol formation? Regional models show this can be a dominant source of OA mass.
Line 485: Ah, there we go... ok, can the relative fraction of SOA/OA in the biomass burning plumes be extracted from these models (models often speciate SOA from primary OA explicitly, so this is something the contributing modellers would have to hand)? The relative fractions would show how much this might influence diversity (and would also be a good point to mention with respect to AOD underestimates - if the SOA/OA fraction is large, and AOD is being underestimated by the models lacking SOA, then adding SOA is a reasonable recommendation).
Line 490: this is about the MEE range you might expect from Curci et al 2015.
Line 495-496 list: Also constrained by the aerosol mixing state assumptions and the extent to which aerosols might be assumed to be core/shell versus homogeneous mixtures <-- Add these to the list.
Line 515: Yes - one problem is cases where the seasonality of the different sources of aerosols differs. It might be better in that context, rather than averaging and generating differences between BB and non-BB runs, to subtract off the background on a finer time scale.
Line 520: Another possibility that the authors might want to consider (for future work) is using other co-located and co-emitted satellite BB species such as CO emissions to identify the BB cells, and use finer time resolution for removal of background (e.g. background values just before fires used for removal of background during fires).
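Something along these lines (a sketch only; the function and window names are mine):

```python
import numpy as np

def bb_aod_local_background(aod, fire_active, pre_window=5):
    """BB AOD with the background taken from fire-free days just before each
    fire day, rather than from a long-term average."""
    aod = np.asarray(aod, dtype=float)
    fire_active = np.asarray(fire_active, dtype=bool)
    bb = np.full_like(aod, np.nan)
    for t in np.where(fire_active)[0]:
        lo = max(0, t - pre_window)
        pre = aod[lo:t][~fire_active[lo:t]]   # fire-free days in the window
        if pre.size:
            bb[t] = aod[t] - pre.mean()       # difference off the local background
    return bb
```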
Line 532: See my earlier comments - a more rigorous constraint based on the variability associated with the methodology for calculating AOD alone could/should be applied here. There is a group of models and regions for which the biases relative to observations are larger than the range of biases Curci et al (2015) saw for AOD at 440 nm. It’s the factor-of-10 cases (and the ones below 28% or so) that are key.
Line 544: Yes - one thing I'm wondering is whether this is an "emission factor" problem (that is, the mass of PM emissions per kg biomass burned is wrong), or if the estimate of the burned area is wrong. Do other emitted species such as CO have the same biases?
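The distinction matters because, in a Seiler-and-Crutzen-style bottom-up inventory such as GFED, both error sources enter the same product (Python sketch; variable names are mine):

```python
def bb_emission_g(area_burned_m2, fuel_load_kg_m2, combustion_completeness,
                  emission_factor_g_per_kg):
    """Bottom-up BB emission estimate: a low burned area and a low emission
    factor give the same symptom (too little emitted mass) but need
    different fixes."""
    dry_matter_kg = area_burned_m2 * fuel_load_kg_m2 * combustion_completeness
    return dry_matter_kg * emission_factor_g_per_kg
```

Co-emitted species such as CO share the burned-area and fuel terms but not the emission factor, so comparing their biases would help separate the two.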
Line 551-552: Surely some additional constraints such as requiring all models to use the same OA/OC ratio in the emissions should also be mentioned here?
Line 570: Maybe add, “For example, we have shown that the choice of an OA/OC ratio, which varies considerably between the models, can have a significant impact on subsequent AOD predictions.”
Line 576: Mixing state assumptions, etc. - the manner in which AOD is calculated also contributes to the variability.
Citation: https://doi.org/10.5194/egusphere-2024-1487-RC2
RC3: 'Comment on egusphere-2024-1487', Anonymous Referee #3, 16 Jul 2024
Tables S1 and S2 are useful information. Please consider moving them to the main text or an Appendix so that they appear in the main paper.
Line 88 “Not surprisingly, there are significant discrepancies among the different estimates of BB aerosol source strength” Please add the following reference:
Wiedinmyer, C., Kimura, Y., McDonald-Buller, E. C., Emmons, L. K., Buchholz, R. R., Tang, W., Seto, K., Joseph, M. B., Barsanti, K. C., Carlton, A. G., and Yokelson, R.: The Fire Inventory from NCAR version 2.5: an updated global fire emissions model for climate and chemistry applications, Geosci. Model Dev., 16, 3873–3891, https://doi.org/10.5194/gmd-16-3873-2023, 2023.
Line 128 and Line 134: This paragraph is a bit confusing. Is the goal of this paper to constrain smoke source strength? Or an inter-model comparison to understand the model processes that impact model AOD? Or to evaluate emission inventories and injection height? Or all three? Did you evaluate injection height in this study? All these processes are subject to uncertainties. In the conclusions it says “we explored in some detail the strengths and limitations of the P2017”. You also draw a few conclusions for GFED3.1. However, you have to assume the P2017 approach is valid in order to draw these conclusions on GFED3.1 and the model performances.
I understand it’s not possible to re-run all the simulations with GFED4.1s. But please discuss the potential impact of using GFED3.1 rather than GFED4.1s since the two versions are different and GFED4.1s includes more small fires. And you do have low small-fires correction.
In the introduction, the descriptions of P2012 and P2017 are too long. Please shorten them and move some content to Section 2.
Figure 5: Are these cases all based on the 1-day MODIS AOD dataset? But even for the same region, during different seasons the plume might meet different criteria and be classified into different groups. And even without considering seasonality, do all the plumes in the same region have similar features and fall into the same groups?
Figure 6: Get rid of the red wavy lines in the figure legend.
Line 409: Using GFED3.1 for inter-model comparison study to understand model uncertainties is still reasonable. However, studying the GFED3.1 emission inventory itself seems outdated.
Figure 7: add legend.
Section 4 is more interesting, and I suggest expanding it and shortening Section 3.
Before doing these analyses, a general evaluation of model AOD from these models is needed. It would be helpful to directly evaluate these models against the MODIS AOD product (beyond the model/MODIS AOD ratio shown in Figure 6b). This can be included in the supplement to at least provide some information on how these models perform.
Citation: https://doi.org/10.5194/egusphere-2024-1487-RC3
AC1: 'Response to Reviewers on egusphere-2024-1487', Mariya Petrenko, 14 Oct 2024
Dear ACP editorial board and anonymous Reviewers,
We thank you for your thoughtful and helpful comments. We addressed each of them as discussed in the Response to Reviewers file.
The line-number pointers in the responses refer to the document "PetrenkoEtal_2024_ACP_wRevisions_resubmit_clean.pdf".
The difference between the original submission and the current revised version can be traced through tracked changes in "PetrenkoEtal_2024_ACP_wRevisions_resubmit_trackChanges.docx".
Thank you,
Mariya Petrenko, Ralph Kahn, Mian Chin and all other co-authors of the manuscript.