This work is distributed under the Creative Commons Attribution 4.0 License.
Evaluating WRF-GC v2.0 predictions of boundary layer and vertical ozone profiles during the 2021 TRACER-AQ campaign in Houston, Texas
Abstract. The Tracking Aerosol Convection Interactions Experiment – Air Quality (TRACER-AQ) campaign probed Houston air quality with a comprehensive suite of ground-based and airborne remote sensing measurements during its intensive operating period in September 2021. Two post-frontal high-ozone episodes (September 6–11 and 23–26) were recorded during this period. In this study, we evaluated the simulation of the planetary boundary layer (PBL) height and the vertical ozone profile by a high-resolution (1.33 km) 3-D photochemical model, the Weather Research and Forecasting (WRF) model coupled with GEOS-Chem (WRF-GC), and contrasted the model performance between ozone-episode days and non-episode days. Compared with the ceilometer at La Porte, the model captures the diurnal variation of the PBL during ozone episodes (R = 0.72–0.77; normalized mean bias (NMB) = 3 %–22 %) and on non-episode days (R = 0.88; NMB = -21 %). Compared with the airborne High Spectral Resolution Lidar-2 (HSRL-2), land–water differences in PBL height are captured better on non-episode days than on episode days. During ozone episodes, the simulated land–water differences are 50–60 m (morning), 320–520 m (noon), and 440–560 m (afternoon), versus observed values of 190 m, 130 m, and 260 m, respectively. On non-episode days, the simulated land–water differences are 140–220 m (morning) and 360–760 m (noon), versus observed values of 210 m and 420 m, respectively. For vertical ozone distributions, the model was evaluated against profile measurements from the Tropospheric Ozone lidar (TROPOZ), the HSRL-2, and ozonesondes, as well as surface measurements from a Model 49i ozone analyzer and a Continuous Ambient Monitoring Stations (CAMS) site at La Porte. The model underestimates free-tropospheric ozone (2–3 km aloft) by 9 %–22 % but overestimates near-ground ozone (< 50 m aloft) by 6 %–39 % during the two ozone episodes. Boundary layer ozone (0.5–1 km aloft) is underestimated by 1 %–11 % during September 8–11 but overestimated by 0 %–7 % during September 23–26. Based on these evaluations, we identified two model limitations: the single-layer PBL representation and the underestimation of free-tropospheric ozone. These limitations have implications for predicting ozone's vertical mixing and distribution in other models as well.
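For reference, the statistics quoted in the abstract are assumed here to follow the conventional definitions, with M_i the modeled and O_i the observed values at N matched points; a minimal sketch:

```latex
% Assumed standard definitions; the paper's own conventions may differ slightly.
\mathrm{NMB} \;=\; \frac{\sum_{i=1}^{N} \left( M_i - O_i \right)}{\sum_{i=1}^{N} O_i} \times 100\,\% ,
\qquad
R \;=\; \frac{\sum_{i=1}^{N} \left( M_i - \bar{M} \right)\left( O_i - \bar{O} \right)}
             {\sqrt{\sum_{i=1}^{N} \left( M_i - \bar{M} \right)^{2}}\;\sqrt{\sum_{i=1}^{N} \left( O_i - \bar{O} \right)^{2}}}
```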
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
- RC1: 'Comment on egusphere-2023-892', Anonymous Referee #1, 12 Jul 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-892/egusphere-2023-892-RC1-supplement.pdf
- RC2: 'Comment on egusphere-2023-892', Anonymous Referee #2, 18 Jul 2023
This manuscript presents an evaluation of the WRF-GC model's performance in simulating the transport and transformation of atmospheric ozone in the Houston area during the TRACER-AQ campaign, covering two ozone episodes in September 2021. The study established a modeling framework for atmospheric ozone in the Houston area and compared the modeled PBLH (as well as other meteorological parameters) and ozone profiles with measurements, highlighting the limitations of the ozone simulation and providing valuable insights for future ozone prediction. This aligns well with the scope of Geoscientific Model Development (GMD).
The manuscript is well written, well structured, and well referenced, drawing on relevant literature. The authors have conducted a rigorous analysis based on comprehensive observational data. The model is clearly described and the observations are introduced in detail. However, some of the methods employed could benefit from further explanation to enhance the clarity of the research process, and the presentation of some results seems to have room for improvement. This leads to a more detailed discussion of specific scientific questions and issues raised by the study:
- On Page 7, Line 40, simulations with various PBL schemes are chosen for the main analysis. Could you briefly explain why you are most interested in the performance of the PBL schemes? [HRRR] is said to be the best, but according to Tables S3 and S4, [Nudged] and [Re-init] have higher correlation coefficients and lower errors. Does “best” mean the best among the various meteorological drivers? Why is [HRRR], instead of [Nudged] or [Re-init], chosen for the main analysis?
- In Figure 3f, it is evident that the model misses the high wind speeds during the day and overestimates the wind speeds at night. However, the goodness of fit (R) is similar to that in Figure 3e, which shows a higher degree of agreement. Is there a reasonable explanation for this? Are the correlation coefficients and NMBs in Figure 3 calculated from the diurnal means (24 simulated vs. 24 observed values) or from the original records? If the former, do the bias and correlation calculated from the original data differ from those presented in the study? Does taking the average of wind directions provide meaningful information?
- When a large amount of similar data is listed densely in the main text (e.g., Page 12, Lines 7–14; Page 14, Lines 14–21), are all of these numbers necessary, and do they support a particular conclusion? Is there a more concise and clear way to present the data instead of listing it?
- In Section 4.1, the evaluation of PBLH derived from the ceilometer uses the correlation coefficient R and NMB as indicators. However, in Section 4.2, when evaluating against mixed-layer heights from HSRL-2, the metrics switch to bias and RMS. Is the change in evaluation metrics necessary, and if so, what is the reason behind it? (A sketch of these standard metrics, and of a circular treatment of wind direction, is given after this list.)
- In the comparison of ozone, the focus is mainly on bias rather than on the correlation coefficient. Could this potentially lead to an insufficient evaluation of the simulated temporal variations in ozone?
- In Figure S2, only the wind directions are shown with correlation coefficients. Are they treated as continuous variables? For example, are 0° and 359° treated as a match or a mismatch? Could you also include one of the performance indices (e.g., R) in the figures for the other variables?
- Again, in Figures S3d and S5d, wind direction is shown in the same way as the other parameters, which makes some small differences, such as that between 0° and 359°, appear large in the figure.
- Text S4 and Figure S6 show a group of observations. What conclusions did you draw from them? Why are they presented in this paper? Have they been compared with the modeled profiles, as was done for TROPOZ?
- We have learned from this study the performance and limitations of the modeling system, but I would suggest that the paper is heavy on conclusions and light on discussion. I recommend expanding the discussion to provide more context and interpretation of the findings.
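As a point of reference, a minimal sketch (assuming the standard definitions; not the authors' code) of how the evaluation statistics discussed above (R, NMB, bias, and RMS error) could be computed, together with a circular treatment of wind direction so that 0° and 359° are regarded as close; the function names and sample numbers are illustrative assumptions:

```python
import numpy as np

def evaluation_metrics(model, obs):
    """Standard model-evaluation statistics (assumed definitions):
    correlation coefficient R, normalized mean bias NMB (%),
    mean bias, and root-mean-square error."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    r = np.corrcoef(model, obs)[0, 1]
    nmb = 100.0 * np.sum(model - obs) / np.sum(obs)
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    return {"R": r, "NMB (%)": nmb, "Bias": bias, "RMSE": rmse}

def wind_direction_error(model_deg, obs_deg):
    """Smallest angular difference in degrees, so 0 vs. 359 gives 1, not 359."""
    d = (np.asarray(model_deg, dtype=float) - np.asarray(obs_deg, dtype=float) + 180.0) % 360.0 - 180.0
    return np.abs(d)

# Illustrative example with made-up numbers (hypothetical ozone values in ppbv):
print(evaluation_metrics([52.0, 60.0, 71.0], [50.0, 65.0, 70.0]))
print(wind_direction_error([0.0, 350.0], [359.0, 10.0]))  # -> [ 1. 20.]
```

With a circular difference of this kind, wind-direction errors can be summarized and plotted on the same footing as the other variables, without the 0°/360° wrap-around inflating the apparent mismatch.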
Some technical corrections or typos:
Page 6 Figure 1: Font sizes of the latitude and longitude labels are inconsistent. The labels on the left subfigures are smaller and blurry, which affects readability.
Page 8, Line 17: “MNYY” should be “MYNN”.
Page 9 Figure 2: If the colorbars for the three subfigures are identical, there is no need to display them three times. Having separate colorbars for each figure could lead to the misconception that the scales are different for each one.
Page 13, Figures 4–5: Ensure consistency in subheadings, y-axis titles, etc. What is the difference between the y-axis labels “Height” and “Altitude”? Ensure that the text in the figures is clear and legible.
Figures S3 and S5: the overlapping lines hinder readability.
In general, the quality of the figures could be improved. The resolution is too low or the font size is too small in some figures, which affects readability, especially in the Supplementary Information (e.g., Figure S2). There is a lack of consistency in the font and font size, as well as the titles and labels across subfigures. Some figures are missing necessary axis titles or colorbar descriptions, and some figures have subfigures that are not aligned.
Citation: https://doi.org/10.5194/egusphere-2023-892-RC2
- AC1: 'Comment on egusphere-2023-892', Xueying Liu, 16 Aug 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-892/egusphere-2023-892-AC1-supplement.pdf
Data sets
Data for 'Evaluating WRF-GC v2.0 predictions of boundary layer and vertical ozone profiles during the 2021 TRACER-AQ campaign in Houston, Texas', Xueying Liu, https://doi.org/10.5281/zenodo.7983449
Authors
Xueying Liu
Shailaja Wasti
Ehsan Soleimanian
James Flynn
Travis Griggs
Sergio Alvarez
John T. Sullivan
Maurice Roots
Laurence Twigg
Guillaume Gronoff
Timothy Berkoff
Paul Walter
Mark Estes
Johnathan W. Hair
Taylor Shingler
Amy Jo Scarino
Marta Fenn
Laura Judd