This work is distributed under the Creative Commons Attribution 4.0 License.
Assessment of smoke plume height products derived from multisource satellite observations for wildfire in the western US
Abstract. As wildfires intensify and fire seasons lengthen across the western U.S., the development of applicable models that can predict the density of smoke plumes and track wildfire-induced air pollution exposures has become critical. Wildfire smoke plume height is a key indicator of the vertical placement of plume mass emitted from wildfire-related aerosol sources in climate and air quality models. With advancements in Earth observation (EO) satellites, spaceborne products for aerosol layer height or plume injection height have recently emerged with increased global-scale spatiotemporal resolution. However, to evaluate column radiative effects and refine satellite algorithms, vertical profiles of regionally representative aerosol data from wildfire emissions need to be measured directly in the field. In this study, we conduct the first comprehensive evaluation of four passive satellite remote sensing techniques specifically designed to retrieve the plume height distribution of wildfire smoke. We compare these satellite products with airborne Wyoming Cloud Lidar (WCL) measurements collected during the 2018 Biomass Burning Flux Measurements of Trace Gases and Aerosols (BB-FLUX) field campaign in the western U.S. Two definitions, namely “plume top” and “extinction-weighted mean plume height”, are used to derive representative heights of wildfire smoke plumes, based on the WCL-retrieved vertical aerosol extinction coefficient profiles. We also perform a comparative analysis of multisource satellite-derived plume height products for wildfire smoke using these two definitions, with the aim of identifying which satellite product is most appropriate under various aerosol loadings and for determining plume height characteristics near a fire-event location or the downwind plume-rise equivalent height. Our findings highlight the importance of understanding the sensitivity of different passive remote sensing techniques to space-based wildfire smoke plume height observations, in order to resolve ambiguity surrounding the concept of “effective smoke plume height”. As additional aerosol-observing satellites are expected to be launched in the coming years, our results will inform future remote sensing missions and EO data selection. This will help bridge the gap between satellite observations and plume-rise modeling to further investigate the vertical distribution of wildfire smoke aerosols.
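For orientation, the sketch below illustrates how the two height metrics named in the abstract could be derived from a lidar-retrieved extinction profile. The smoke-detection threshold, the synthetic profile, and the function name are illustrative assumptions, not the algorithm or thresholds described in the paper.

```python
import numpy as np

def smoke_plume_heights(z, ext, top_threshold=0.05):
    """Illustrative calculation of two smoke plume height metrics from a
    vertical aerosol extinction profile.

    z   : altitudes (km AGL), ascending
    ext : aerosol extinction coefficient at each altitude (km^-1)
    top_threshold : extinction value (km^-1) above which a bin is treated
                    as smoke; an assumed value, not the paper's criterion.
    """
    smoke = ext > top_threshold
    if not smoke.any():
        return np.nan, np.nan
    # "Plume top": highest altitude at which extinction still exceeds the threshold
    sph_top = z[smoke].max()
    # "Extinction-weighted mean plume height": altitude weighted by extinction
    sph_ext = np.sum(z * ext) / np.sum(ext)
    return sph_top, sph_ext

# Synthetic example profile: a smoke layer centred near 3 km
z = np.arange(0.0, 8.0, 0.1)                       # km
ext = 0.6 * np.exp(-0.5 * ((z - 3.0) / 0.7) ** 2)  # km^-1
print(smoke_plume_heights(z, ext))                 # top ≈ 4.5 km, weighted mean ≈ 3 km
```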
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-1658', Anonymous Referee #1, 23 Oct 2023
In this study, the authors compare upward facing lidar measurements of smoke plume characteristics from an aircraft campaign with plume height products derived from passive satellite remote sensing. To do so, they first establish two definitions of plume height, a plume top and an extinction-weighted smoke mean plume height. This is important because what is meant by plume top or plume injection height is not clear in the literature and differs by satellite product.
This work is a useful contribution to the field and is generally rigorous. While this topic is complex and multidimensional, the authors make it somewhat more difficult to understand with overly complex language and a few non-standard graphical choices. The lack of CALIPSO data in the study is surprising and should be explained. Otherwise, most of the comments below address clarity.
Line 90:
This is the only mention of CALIPSO and CATS in the manuscript. The authors should address why they were not included in the analysis. The narrow swath is not justification enough, and the lack of diurnal variation is shared by many of the passive products included in the analysis. If there were not enough coincident overpasses to make meaningful statistics, that would be reason to exclude them, and would highlight the difficulty of using the spaceborne lidar products in direct comparison with aircraft campaigns.
Line 98:
The word “hence” here is not needed because this sentence does not follow from the previous. There are many such small grammatical issues in the draft. I have identified a few, but certainly not all. The manuscript should be reviewed by an experienced technical editor.
Line 147:
Take out “high precision.” It is not defined and unclear why the MISR retrievals are higher precision than the other sensors.
Line 170:
Remove “subsequent”
Line 274:
Does this method of plume top determination work also for modeled aerosol data? It would be very useful if the same definition could be used for both model and satellite plumes. Gridded models typically exhibit exponential decay of aerosol concentrations as they go up in vertical layers. Would this calculation derive sensible SPH(top) in that case?
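To illustrate the referee's concern, a small sketch (reusing the hypothetical threshold criterion from the earlier example) shows how a threshold-based plume top shifts with the chosen threshold when the profile decays exponentially with height rather than containing a distinct elevated layer:

```python
import numpy as np

# Hypothetical modeled profile: extinction decaying exponentially with height,
# as in many gridded models, with no well-defined elevated smoke layer.
z = np.arange(0.0, 10.0, 0.25)   # km
ext = 0.8 * np.exp(-z / 1.5)     # km^-1

# With a simple threshold criterion, the diagnosed "plume top" is just the level
# where the exponential tail crosses the threshold, so it moves with the threshold:
for thr in (0.01, 0.05, 0.1):
    top = z[ext > thr].max()
    print(f"threshold {thr:.2f} km^-1 -> SPH_top ≈ {top:.1f} km")
# 0.01 -> ~6.5 km, 0.05 -> ~4.0 km, 0.10 -> ~3.0 km
```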
Line 308:
These two paragraphs are confusing. I’d suggest you start with something like:
Because of the differences in spatial resolution, we developed two methods for collocating lidar measurements with satellite-derived SPH products, one for the finer resolution MODIS and MISR data and one for the coarser resolution VIIRS/TROPOMI. Then go on and explain the two methods.
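As a purely illustrative sketch of the kind of collocation logic the referee is asking to see spelled out, the snippet below matches each lidar profile to the nearest satellite SPH pixel within an assumed 10 km / ±1 h window. The thresholds, variable names, and datetime types are placeholders, not the criteria used in the manuscript.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between points given in degrees, returned in km."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def collocate(lidar_lat, lidar_lon, lidar_time,
              sat_lat, sat_lon, sat_time, sat_sph,
              max_dist_km=10.0, max_dt_hours=1.0):
    """For each lidar profile, return the SPH of the nearest satellite pixel
    within the distance/time window, or NaN if none qualifies.
    Times are assumed to be numpy datetime64 arrays."""
    matched = np.full(len(lidar_lat), np.nan)
    for i in range(len(lidar_lat)):
        dt = np.abs(sat_time - lidar_time[i]) / np.timedelta64(1, "h")
        dist = haversine_km(lidar_lat[i], lidar_lon[i], sat_lat, sat_lon)
        ok = (dt <= max_dt_hours) & (dist <= max_dist_km)
        if ok.any():
            matched[i] = sat_sph[ok][np.argmin(dist[ok])]
    return matched
```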
Line 404:
“upward-facing” lidar profiles
Line 406:
It is very difficult for the reader to interpret this figure for fire/plume/atmosphere characteristics because those are not given in the plot. Rather, there is a flight name and that can be cross-referenced with the details given in table 2. What I take from Figure 2b is that the difference between SPH(top) and SPH(ext) is often greater within a single plume than the differences among different plumes when comparing the same metric. This seems important!
Line 415:
Couldn’t SPH(ext) also be underestimated by the lidar under conditions of optically thick plumes?
Line 425:
While 1000 acres is sometimes used as a definition for a large wildfire, I don’t think it makes sense in this context. First, your data set consists of fires that are all at least an order of magnitude larger than that. Also, is there evidence of many fires of ~1000 acres that produced plumes lofted to the free troposphere or stratosphere?
Line 460:
I find this plot very hard to decode. The use of the negative and positive x-axis to distinguish morning and afternoon makes it difficult to compare morning/afternoon pairs directly. Also, the use of flight codes instead of fire names makes it hard to know how individual observations are related. Your readers were not on the flight campaign. I would suggest using multiple panels, with each panel representing a single fire. Use color or location to indicate morning vs. afternoon and put all heights on the same axis.
Line 531:
Remove the word “activity.”
Line 599:
Figure 6 seems to be the punchline and the most important part of the study. I wish it had come sooner in the paper. I would put it at the beginning of the results section.
Line 618:
Remove the word “numeric.”
Citation: https://doi.org/10.5194/egusphere-2023-1658-RC1
RC2: 'Comment on egusphere-2023-1658', Anonymous Referee #2, 28 Oct 2023
This study seeks to first provide standard definitions of lidar-derived smoke plume height, the plume top height and the effective plume height, which is based on an extinction-weighted mean. Then, 4 different smoke plume height retrieval algorithms based on passive space-borne remote sensing instruments are evaluated against each of the lidar smoke plume height metrics in order to determine which metric is a better evaluator for each product.
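For reference, the extinction-weighted mean height the referee describes is commonly written as a weighted average of altitude over the extinction profile (the notation here is generic and may differ from the manuscript's symbols):

```latex
\mathrm{SPH}_{\mathrm{ext}} \;=\;
\frac{\sum_{i} z_i \, \sigma_{\mathrm{ext}}(z_i)\,\Delta z_i}
     {\sum_{i} \sigma_{\mathrm{ext}}(z_i)\,\Delta z_i},
```

where $z_i$ is the midpoint altitude of layer $i$, $\sigma_{\mathrm{ext}}(z_i)$ the retrieved aerosol extinction coefficient, and $\Delta z_i$ the layer thickness.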
I believe the core science is good and worthy of publication; however, I feel that numerous issues detract from the focus and, in the end, will leave readers somewhat confused. My comments and suggestions are described below, which generally relate to conciseness and clarity.
Major comments:
Please include more background information on the errors and biases (including sampling bias) of the four algorithms.
For example, you mention that the MODIS/MAIAC algorithm has a maximum plume height of 10 km and requires AOD470 > 0.8. I’d like to see some brief discussion (w/ references) on how these requirements limit the available smoke plume sample, i.e. what percentage of smoke plumes is MODIS/MAIAC usable for? I realize that the point of this paper is a direct comparison of all of these products, but background information on previous validation/verification of these products is still required. While comparisons of MAIAC and ALH with MISR and CALIOP are mentioned, I’d like more and with quantitative discussion included (Side note – if the cited studies involving comparisons with MISR are in fact with the same MERLIN product used here [admittedly I do not know], then even *more* detail should be provided). The same goes for MERLIN and ASHE – no mention of retrieval uncertainty, errors, or biases from the literature. This is critical information when comparing four different retrieval methods from different sensors, especially considering that each of the instruments may just be “seeing” different things differently (i.e. the fundamental differences between a weighted mean height based on extinction at a given wavelength described for the lidar data in the study vs. passive-based methods that fundamentally rely on some effective emission height of the aerosol layer at various wavelengths, not even getting into the homogeneity of the smoke layers’ microphysical properties). I think table 1 would be a good place for some of this information (at least the given retrieval uncertainties).
Section 3.2 is very confusing and needs to be rewritten.
Explaining precisely how collocation between multiple platforms is done is paramount, even more so when comparing specifically in situ and satellite data. This section seemed very out-of-order to me. The first paragraph is good. After that, start simply – tell me which dataset you are starting from (i.e. aircraft or satellite), the temporal and spatial ranges for collocation, and then how multiple matches are handled. Then go into caveats, exceptions, etc. A figure showing an example of how collocation is done, especially given that there are two collocation methods used, should be included.
The discussion on boundary layer height within section 4.1 seems out of place in this paper and distracts from the focus, which is largely the comparison between the SPH products with the aircraft lidar. Especially considering you only went halfway with the modeling and did not use a finer resolution coupled fire/atmosphere model (I understand why you would not want to – it’s well beyond the scope here). The portion on page 16 in particular just seems to meander without much of a point. Either tighten it up by justifying why it’s there and explicitly describing what we’ve gained by that portion of the work or remove it.
Also, writing can be improved, most often by simplifying - e.g. “…make it work…” on line 80 is too informal, “…with high precision…” on line 147 is unnecessary, Line 239 “It has been recognized…” can be changed to “It is…” and the rest of the sentence adjusted accordingly. Line 451 “It was accidentally begun by a crashed helicopter…” is unnecessary. These are only a few examples. Please revise the manuscript with conciseness, phrasing, and sentence structure in mind.
Minor comments:
Line 43: I’m assuming, based on the context that follows, that by “remote sensing” you mean specifically passive remote sensing, because a space-borne active sensor such as CALIOP can (could?) retrieve a layer height, could it not? Please be specific. Also, if you *are* referring to passive remote sensing, please change “observations” on line 44 to “retrievals” as passive remote sensing instruments do not directly observe heights.
Line 80: This sentence is poorly constructed
Lines 87-90: I’m not quite sure why you jump from SPH retrieval to fire detection – these are fundamentally different remote sensing problems.
Line 110: “…of which SPH definition can effectively interpret a specific satellite SPH retrieval algorithm” is confusing, perhaps due to word choice.
Table 1. Please include the proper references for each algorithm within the table.
Line 133: What exactly is unique about MODIS’ ability to detect fires?
Line 150: You mention “…wealth of data collected…over two decades…” here, which is contradictory to the data availability of the MISR listed in the “time period” column of table 1.
Line 175: There are now 3 VIIRS sensors in space – why is the revisit time listed here as 12 hours? (I see now this is because of the field campaign limiting the study period. This information should be mentioned, if not fully provided, to prevent confusion. This requires moving table 2 up.)
Line 220: I can guess why prescribed and small fires are excluded from this study but you do not want to leave readers guessing – please provide a sentence explaining why.
Line 239: This sentence needs re-writing.
Line 253: I believe your use of a superscript in a parameter name (SPH^TOP) is the first I’ve ever seen, or at least can remember. Unless you have a reason for doing so, I recommend going with standard procedure and switching it to a subscript (SPH_TOP). Same goes for SPH^ext.
Lines 309-313. This section is quite unclear and specifically I do not understand what you mean by saying that VIIRS/ASHE has a “temporal duration” of 6 minutes.
Line 356: “fire plume”?
Line 359: So the WCL signal is attenuated by thick smoke, but many of the satellite retrievals require optically thick smoke layers – at what optical thickness does lidar attenuation become an issue?
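As rough context for this question, a single-scattering Beer–Lambert estimate (ignoring multiple scattering, detector sensitivity, and background noise, so only an order-of-magnitude guide) gives the two-way transmittance through a smoke layer of optical depth tau:

```python
import numpy as np

# Two-way transmittance of an elastic lidar signal through a layer of
# aerosol optical depth tau (Beer-Lambert, single scattering only).
for tau in (0.5, 1.0, 2.0, 3.0):
    t_two_way = np.exp(-2.0 * tau)
    print(f"AOD {tau:.1f}: two-way transmittance ≈ {t_two_way:.3f}")
# AOD 2 leaves ~2 % of the signal returning from above the layer, and AOD 3
# only ~0.2 %, so full attenuation becomes plausible well before AOD 3.
```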
Figure 2. Include SPHTop and SPHext as figure titles or at least as a label somewhere on the top two panels. Also, is it necessary to plot them separately, thus making it more difficult to compare them directly? I’d suggest doing something like merging them into one panel, removing the CDF as it’s basically redundant to the PDF lines, and then using dotting/dashing to indicate the ratios. That way both methods can be shown on the same axis and compared more easily. Even though they are not really intended for comparing with each other, I still think it’s important to be able to quickly visualize the differences between them, and plotting them on a different axis precludes this. If sticking with two separate panels, they should be labeled “(a)” and “(b)”, as they are separate panels.
Line 425: 1000 acres is approximately equal to 4 km², which is about a single pixel from an instrument such as MODIS. Showing acres burned in table 3 but using acres burning for the definition of “large wildfire” is confusing at first glance. Maybe change the information in table 3 to something like the maximum concurrent acres burning.
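For reference, the unit conversion behind the referee's figure:

```latex
1000~\text{acres} \times 4046.86~\tfrac{\text{m}^2}{\text{acre}}
\approx 4.05 \times 10^{6}~\text{m}^2 = 4.05~\text{km}^2 .
```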
Figure 4: the orange color of the MISR/MERLIN line needs to be changed – it is extremely difficult to see against the extinction.
Figure 5: the pink lines are too similarly colored.
Line 480: “It is not advisable to use the MODIS-Aqua/MAIAC product for estimating downwind SPH due to its suboptimal performance in such scenarios” I think this is an overly aggressive statement based on one case.
Figure 6. Please include values like r² and RMSE on the panels – going back and forth between figure 6 and the previous tables is cumbersome.
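A minimal matplotlib sketch of the kind of per-panel annotation being requested; the data, variable names, and layout are placeholders, not the study's figures:

```python
import numpy as np
import matplotlib.pyplot as plt

def annotate_stats(ax, x, y):
    """Add r^2 and RMSE for y vs. x to the corner of an axes."""
    r = np.corrcoef(x, y)[0, 1]
    rmse = np.sqrt(np.mean((y - x) ** 2))
    ax.text(0.05, 0.95, f"$r^2$ = {r**2:.2f}\nRMSE = {rmse:.2f} km",
            transform=ax.transAxes, va="top", ha="left")

# Placeholder data standing in for lidar vs. satellite SPH pairs (km)
lidar_sph = np.array([2.1, 3.4, 4.0, 2.8, 5.2])
sat_sph = np.array([2.4, 3.1, 4.5, 2.5, 4.8])

fig, ax = plt.subplots()
ax.scatter(lidar_sph, sat_sph)
ax.set_xlabel("WCL SPH (km)")
ax.set_ylabel("Satellite SPH (km)")
annotate_stats(ax, lidar_sph, sat_sph)
plt.show()
```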
Citation: https://doi.org/10.5194/egusphere-2023-1658-RC2
- AC1: 'Response to Reviewers', Jingting Huang, 07 Dec 2023