This work is distributed under the Creative Commons Attribution 4.0 License.
Meteorological modeling sensitivity to parameterizations and satellite-derived surface datasets during the 2017 Lake Michigan Ozone Study
Abstract. High-resolution simulations were performed to assess the impact of different parameterization schemes, surface initialization datasets, and analysis nudging on lower-tropospheric conditions near Lake Michigan. Simulations were run in which climatological or coarse-resolution surface initialization datasets were replaced by high-resolution, real-time datasets depicting lake surface temperatures (SST), green vegetation fraction (GVF), and soil moisture and temperature (SOIL). Comparison of a baseline simulation employing a configuration similar to that used at the Environmental Protection Agency (“EPA”) with another simulation employing an alternative set of parameterization schemes (referred to as “YNT”) showed that the EPA configuration produced more accurate analyses on the outermost 12-km resolution domain, but that the YNT configuration was superior for the higher-resolution nests. The diurnal evolution of the surface energy fluxes was similar in both simulations on the 12-km grid but differed greatly on the 1.3-km grid, where the EPA simulation had a much smaller sensible heat flux during the daytime and a physically unrealistic ground heat flux. Switching to the YNT configuration led to substantial decreases in root mean square error for 2-m temperature and 2-m water vapor mixing ratio on the 1.3-km grid. Additional improvements occurred when the high-resolution satellite-derived surface datasets were incorporated into the modeling platform, with the SOIL dataset having the largest positive impact on temperature and water vapor. The GVF and SST datasets also produced more accurate temperature and water vapor analyses, but degraded the wind speed analyses, especially when using the GVF dataset. The most accurate simulations were obtained when using the high-resolution SST and SOIL datasets and applying analysis nudging above 2 km AGL.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
- RC1: 'Comment on egusphere-2023-153', Jonathan Pleim, 17 Apr 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-153/egusphere-2023-153-RC1-supplement.pdf
- AC1: 'Reply on RC1', Jason Otkin, 13 Jun 2023
- RC2: 'Comment on egusphere-2023-153', Anonymous Referee #2, 20 Apr 2023
In this study, the authors performed eight WRF model simulations (at 12, 4, and 1.3 km horizontal resolution) to assess the impact of different parameterization schemes, land/lake surface initialization approaches, and analysis nudging methods on the simulated surface energy fluxes and near-surface atmospheric conditions over the Lake Michigan region during the 1-month LMOS field campaign period in 2017. The model evaluation presented in this work helped the same group select the meteorological inputs for the CMAQ simulations described in a companion paper by Pierce et al., which is also currently under review for the same journal.
My major comments include:
Novelty: Comparing Pleim-Xiu with indirect soil nudging against Noah is not a new idea; more than 7 years ago, a TCEQ-funded project on this topic was conducted. Many modeling groups, including at NASA and NOAA, plan to (or are already actively working on) migrating from Noah to the Noah-MP land surface model because of known limitations in Noah. Initializing WRF with output from LIS or similar frameworks to benefit weather and air quality studies is not new either, which the authors themselves acknowledge. Running at very high resolution over regions with complex surface types (e.g., land vs. water) is also broadly appreciated by the modeling community. Although this is a companion paper to Pierce et al., it should also be able to stand alone with its own highlights. The authors are therefore encouraged to clearly underscore the novel aspects of this study, which may require adding modeling experiments and more rigorous evaluation; perhaps the resulting paper would fit better in GMD. Otherwise, shortening the paper and merging its key information into Pierce et al. is suggested.
Methods and presentation: While a lot of information is given, clarifications on the methods are still necessary. Specifically:
- The authors stated at L222-223 that “Direct insertion into the WRF model was possible because of the similarly configured Noah LSM used in both the LIS and WRF simulations”. Please provide the version of Noah used in LIS for this study, along with evidence that it is similar to the Noah version embedded in WRF 3.8.1. Moreover, this statement can only be accepted if the static inputs of the land model (land use/land cover, soil type, terrain, etc.) are consistent between LIS/Noah and WRF 3.8.1; the LIS and WRF 3.8.1 static inputs are generated by the LDT and WPS tools, respectively. Please clarify what exactly was done (a consistency-check sketch follows this list). The study area has complex surface types (not only land vs. water, but also, over land, urban vs. non-urban categories), so some discussion of how these surface characteristics are represented by the model would be very informative.
- The evaluation of the model runs is not rigorous and is not well connected with Pierce et al. The uncertainty of the VIIRS GVF product is not introduced in the paper; the product appears to have short latency, but for a retrospective analysis like this it is important to characterize its daily quality over this region for the air pollution events studied. A similar comment applies to GLSEA. The SPoRT LIS product is not described clearly (it is not clear whether land data assimilation is enabled in the LIS system and, if so, some data assimilation diagnostics could be shown); my understanding is that SPoRT hosts documentation and visualizations of these routinely generated products elsewhere, which could be cited in the paper. In terms of WRF model evaluation, some statistics and maps are presented but only for a limited number of variables, and, as the authors noted at L232-233, “these surface observations were also used to perform surface nudging during the EPA simulation, which will impact the results presented in Section 3 because surface nudging was not used during any of the YNT simulations”. Because the model output serves as meteorological input to CMAQ, a list of variables central to the pollutants being studied should be selected, with justifications, followed by the model performance for those variables. This performance could be discussed in connection with the air pollution events and time series presented in Pierce et al., and additional evaluation metrics, such as correlations between modeled and observed time series, could be added (see the metrics sketch after this list). Furthermore, are there really no in-situ flux measurements or PBL information anywhere across the three WRF domains, as stated at L402?
- More justification of the model configuration design should be added: although the focus of the study is on land initialization/model and nudging, the selection of all the other physics options, the ICs/BCs, and the distribution of vertical layers (are 40 layers fine enough?) for this area should be justified. In particular, are the Noah-related setups based on literature or on recommendations from any of the partnering local agencies? Some extended discussion of the ACM2 versus YSU PBL schemes and how they affect the different model runs and conclusions would also be very helpful.
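As an illustration of the static-input consistency check requested in the first comment above, the following is a minimal sketch, not the authors' workflow: it compares the areal coverage of each dominant land-use category in the WPS-generated WRF static file against the LDT-generated LIS parameter file. The file paths and the LDT variable name LANDCOVER (assumed to hold per-category fractions) are hypothetical; LU_INDEX is the standard dominant-category field in WPS geo_em files. Because the LIS and WRF grids differ, this compares only areal coverage fractions rather than cell-by-cell agreement, and a real check would also subset both files to a common region.

```python
# Minimal sketch, not the authors' workflow: compare the areal coverage of each
# dominant land-use category in the WRF (WPS) static file against the LIS (LDT)
# parameter file.  File paths and the "LANDCOVER" variable name are assumptions.
import numpy as np
import xarray as xr

geo = xr.open_dataset("geo_em.d03.nc")      # WPS static file (hypothetical path)
ldt = xr.open_dataset("lis_input.d03.nc")   # LDT parameter file (hypothetical path)

wrf_dom = geo["LU_INDEX"].squeeze().values.astype(int)            # dominant category per WRF cell
lis_dom = ldt["LANDCOVER"].squeeze().values.argmax(axis=0) + 1    # dominant category per LIS cell
                                                                  # (assumes a (category, y, x) layout)

def coverage(dom, ncat):
    """Fraction of grid cells assigned to each category 1..ncat."""
    counts = np.bincount(dom.ravel(), minlength=ncat + 1)[1:]
    return counts / counts.sum()

ncat = int(max(wrf_dom.max(), lis_dom.max()))
for cat, (fw, fl) in enumerate(zip(coverage(wrf_dom, ncat), coverage(lis_dom, ncat)), 1):
    if abs(fw - fl) > 0.01:  # flag categories whose areal coverage differs by more than 1%
        print(f"category {cat:2d}: WRF {fw:.3f} vs LIS {fl:.3f}")
```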
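Similarly, as a concrete version of the additional evaluation metrics suggested in the second comment (bias, RMSE, and the correlation between modeled and observed time series), here is a minimal sketch that assumes paired hourly values, e.g., 2-m temperature at one station, have already been matched into NumPy arrays; station matching and file handling are study-specific and not shown.

```python
# Minimal sketch of the suggested evaluation metrics for paired model/obs series:
# mean bias, RMSE, and Pearson correlation.  Inputs are assumed to be matched
# hourly values (e.g., 2-m temperature in K at one station).
import numpy as np

def evaluate(model, obs):
    """Return mean bias, RMSE, and Pearson correlation for paired series."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    valid = ~np.isnan(model) & ~np.isnan(obs)     # drop missing observations
    m, o = model[valid], obs[valid]
    bias = np.mean(m - o)
    rmse = np.sqrt(np.mean((m - o) ** 2))
    corr = np.corrcoef(m, o)[0, 1]
    return bias, rmse, corr

# Synthetic values standing in for hourly T2 (K) at one station.
obs = 290.0 + 5.0 * np.sin(np.linspace(0, 4 * np.pi, 96))
mod = obs + np.random.default_rng(0).normal(0.5, 1.0, obs.size)
print("bias=%.2f K  rmse=%.2f K  r=%.2f" % evaluate(mod, obs))
```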
Minor comments:
- Pleim-Xu should be Pleim-Xiu throughout the paper.
- Table 1: IC/LC should be IC/BC.
- Using SST as the short form of lake surface temperature is a little confusing.
- The authors define soil moisture/soil temperature as SOIL but still write soil moisture and/or soil temperature in multiple places.
- I think using “evaluation” instead of “analysis” in many places of this paper would be less confusing.
- The abstract is very descriptive and specific to this modeling experiment rather than delivering messages that could impact a broader audience.
- L208: spell out NLDAS-2.
- Units are missing from the Figure 2 difference plots. Text in Figures 7-9 is too small.
Citation: https://doi.org/10.5194/egusphere-2023-153-RC2
- AC2: 'Reply on RC2', Jason Otkin, 13 Jun 2023