Influence of Temperature and Humidity on Contrail Formation Regions in EMAC: A Spring Case Study
Abstract. While carbon dioxide emissions from aviation often dominate climate change discussions, the significant impact of non-CO2 effects like contrails and contrail-cirrus must not be overlooked, particularly for the mitigation of climate effects. This study evaluates key atmospheric parameters influencing contrail formation, specifically temperature and humidity, using various model setups of a general circulation model (GCM) with different vertical resolutions and two nudging methods for specified dynamics setups. Comparing simulation results with reanalysis data for March and April 2014 reveals a systematic cold bias in mean temperatures across all altitudes and latitudes, particularly in the mid-latitudes where the bias is about 3–5 K, unless mean temperature nudging is applied. In the upper troposphere/lower stratosphere, the humidity of the nudged GCM simulations shows a wet bias, while a dry bias is observed at lower altitudes. These biases result in overestimated regions for contrail formation in GCM simulations compared to reanalysis data. A point-by-point comparison with research aircraft measurements along the flown trajectories shows similar biases. Exploring relative humidity over ice (RHice) threshold values for identifying ice-supersaturation regions provides insights into the risks of false alarms for contrail formation, together with information on hit rates. Accepting a false alarm rate of 16 % results in a hit rate of about 40 % (RHice threshold 99 %), while aiming for an 80 % hit rate increases the false alarm rate to at least 35 % (RHice threshold 91–94 %). A comprehensive one-day case study, involving aircraft-based observations and satellite data, confirms contrail detection in regions identified as potential contrail coverage areas by the GCM.
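For context on the hit rate and false alarm rate quoted above, here is a minimal sketch of how such scores can be derived from a binary contingency table built by thresholding RHice; the variable names and the particular false-alarm definition (false alarms relative to all observed non-events) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def contingency_scores(rhice_model, issr_observed, threshold=0.99):
    """Hit rate and false alarm rate for one RHice threshold.

    rhice_model   : modelled relative humidity over ice (fraction, 1.0 = 100 %)
    issr_observed : boolean array, True where observations show ice supersaturation
    threshold     : RHice value above which the model predicts an ISSR
    Assumed definition: false alarm rate = false alarms / all observed non-events.
    """
    predicted = rhice_model >= threshold
    hits         = np.sum(predicted & issr_observed)
    misses       = np.sum(~predicted & issr_observed)
    false_alarms = np.sum(predicted & ~issr_observed)
    correct_negs = np.sum(~predicted & ~issr_observed)

    hit_rate         = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negs)
    return hit_rate, false_alarm_rate

# Lowering the threshold (e.g. from 0.99 towards 0.91) classifies more points
# as ISSR, so both the hit rate and the false alarm rate rise; that is the
# trade-off summarised in the abstract.
```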
Status: final response (author comments only)
RC1: 'Comment on egusphere-2024-2142', Anonymous Referee #2, 07 Oct 2024
General Comments: This study evaluates the EMAC model's ability to identify contrail formation regions. This is beneficial for setting up operational mitigation options that avoid climate-sensitive regions. However, to some extent, the analysis of the experimental results is redundant and does not focus on the main conclusion; some analyses can be removed. Furthermore, the overall presentation (including figures and tables) should be improved (see the following Specific Comments). More importantly, merely providing a direct description of the simulation results cannot meet the requirements of this journal. The current conclusions (e.g., that a certain model setup shows the best agreement with observations) apply only to the EMAC model version used in this study. Deeper analyses (e.g., of physical mechanisms) are needed; valuable conclusions must have generalization ability (e.g., through mechanism analysis).
Specific Comments: Title: "EMAC" is not a well-known abbreviation. What is "EMAC" short for? This should be pointed out in the abstract.
Abstract: It is necessary to point out which GCM was used in this study.
Abstract: "using various model setups of a general circulation model (GCM) with different vertical resolutions". Since different resolutions are mentioned, the impact of resolution should be introduced.
Abstract: "Accepting a false alarm rate of 16% results in a hit rate of about 40% (RHice threshold 99%), while aiming for an 80% hit rate increases the false alarm rate to at least 35% (RHice threshold 91-94%)" is very difficult to understand before reading the whole paper.
Abstract: Right now, the conclusions from comparing simulation results with "reanalysis data for March and April 2014", "along flown trajectories with measurement aircraft data" and "a comprehensive one-day case" are separate. I think it is better to provide comprehensive conclusions rather than separate conclusions.
Abstract: "confirms contrail detection in regions identified as potential contrail coverage areas by the GCM" is difficult to understand.
Abstract: At the end of the abstract, it would be better to give one useful suggestion based on the above evaluations.
Line 24~25: Is only the EMAC model utilised to calculate climate change functions (CCFs)?
Line 30: Why is "a consistent modelling framework" required?
Line 31: What is the reason for "validation opportunities are limited"? Please give background introduction.
Line 45~48: These indicate that "temperature bias" is induced by "humidity bias (numerical diffusion of water vapour)". Am I right?
Line 49: Since "temperature bias" is induced by "humidity bias", why nudge "temperature" rather than "humidity"?
Line 53~56: It is better to point out the purpose of these comparisons and evaluations.
Line 118: Is potential vorticity (PV) usually used for separating tropospheric and stratospheric air? Why not separate the troposphere and stratosphere based on their definitions?
Line 143~144: "This fraction is calculated as the difference between the potential coverage of contrails and cirrus clouds combined and the coverage of natural cirrus clouds alone." is difficult to understand. I still do not know how to calculate the fraction of contrail cirrus coverage. What is the difference between "the fraction of contrail cirrus coverage" and "the fraction of contrail cirrus coverage"? How to consider the sub-grid-scale variability in RHice?
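Read literally, the quoted sentence describes a simple difference of cloud coverages; written as a formula (with symbols chosen here purely for illustration, not taken from the paper) it would be:

```latex
% Illustrative notation only; the paper's own symbols may differ.
b_{\text{contrail cirrus}} \;=\; b_{\text{pot}}(\text{contrails} + \text{cirrus}) \;-\; b_{\text{cirrus}}
```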
Line 178: What is the difference in the definition between cirrus and contrail cirrus? How to separate them?
Line 197~203: Here, only the treatment of the difference in time interval is introduced. How about the spatial difference between ML-CIRRUS and model results?
Line 206: Why only 26 March 2014? The ML-CIRRUS campaign ran from 10 March to 16 April 2014.
Line 235~248: Compared to the STN setups, the temperature bias from the MTN setup is almost negligible. In other words, the difference between STN and MTN is pronounced. The reason for this difference must be clearly explained.
Figure 3: It is necessary to show the probability density from the ERA5 data (i.e., the benchmark). Furthermore, I think it is better to use a line chart rather than a bar chart (a line chart is clearer, especially for the differences among the EMAC model setups).
Line 250~257: For me, it is very difficult to see these conclusions from Figure 3.
Line 271: "For MTN simulations, a consistent wet bias is observed at higher altitudes (150 hPa to 250 hPa)". What is the reason?
Line 309: "the area-weighted mean potential contrail coverage (AWM PotCov) for grid boxes with a potential contrail coverage above zero" is difficult to understand.
Line 331~338: The ML-CIRRUS campaign provides in-situ measurements (i.e., along a trajectory line), while the model results along trajectories represent grid-mean variables (i.e., averages over a large area). How is the difference between "in-situ measurements" and "model grid-mean" values accounted for?
Line 340: How is the correlation calculated? Is the correlation based on the time series or the spatial distribution? Why not compare the temperature data directly based on the trajectory map?
Line 349~351: "The differences between the MTN simulation results and the measurements can mainly be attributed to the resampling of the observation data and the uncertainties during the measurement process". I cannot understand this reasoning. Please give a more detailed explanation.
Figure 5: Figures S5a and S5b can be included in Figure 5. Some other figures can also be improved in this way.
Line 355~365: The relations between Figure 6 and Figure 2 should be mentioned.
Line 373: Table 3 should also list comparisons based on the model results for some other thresholds (e.g., 85 %, 95 %, and 100 %).
Line 379: "The Equitable Threat Score (ETS) characterizes the agreement between the data sets, considering hits, misses, and false alarms while adjusting for random chance" is difficult to understand.
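For reference, the conventional form of the Equitable Threat Score (also known as the Gilbert skill score), written with hits $H$, misses $M$, false alarms $F$, and total sample size $N$, is shown below; whether the paper uses exactly this form is an assumption here.

```latex
\mathrm{ETS} = \frac{H - H_{\text{rand}}}{H + M + F - H_{\text{rand}}},
\qquad
H_{\text{rand}} = \frac{(H + M)(H + F)}{N}
```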
Line 404~406: It seems that the selected case (i.e., 26 March 2014) is not a common case. In my opinion, it would be better to choose a representative case.
Line 409: "the methodology of REACT4C" should be introduced.
Line 410: "Geopotential height anomalies at 250 hPa indicate minimal correlation with both, the North Atlantic oscillation index, and the East Atlantic index". What supports this conclusion?
Line 423~424: The word "parameter" might lead to misunderstanding (e.g., the tunable parameters in model code).
Line 440: "during the third dive, the observed RHice are lower and the model-predicted RHice values exceed the observational data in parts by roughly 20%. This suggests that the simulated gradient is not steep enough, as the flight is further away from the tropopause than in the dives before" is difficult to understand.
Line 465: If possible, please show air traffic density in Figure 9. At least, show the airports and flights in Figure 9.
Line 498: What is anti-ice (AI) correction?
Line 553: "However, since our study focuses solely on conditions for contrail formation, we use the 100% threshold for the observation data." I cannot understand the logical relationship.
Line 564~565: "This alignment may be attributed to the atmospheric conditions not being at the extreme edges, suggesting that differences in atmospheric conditions between models and observations do not significantly impact the results in this case." is difficult to understand.
Discussion: This study evaluates the EMAC model's ability to identify climate-sensitive regions. However, the ultimate goal is to enable their forecast within numerical weather prediction models. Can the global mean temperature nudging technique be used for model prediction simulations? Which conclusions from this study remain useful for numerical weather prediction models? It is necessary to discuss this issue.
Line 568: This study did not evaluate the radiative effects of contrails.
Line 576: "Future research should aim to reduce the model bias, potentially by transitioning from the ECHAM to the ICON core model". What supports this conclusion?
Line 579~581: Can the simulation results from only one model (i.e., this study) support these conclusions (i.e., GCMs)?
Line 581~583: "The significant influence of temperature and humidity biases on contrail formation underscores the importance of considering model uncertainties when using climate-sensitive areas for contrail avoidance. Extending the existing Climate Change Functions to different seasons and regions is feasible and recommended for future studies". I cannot understand the logical relationship.
Technical Corrections: Line 29: "climate change functions" should be replaced by "CCFs".
There are too many abbreviations in this study. Many of them are unnecessary.
Line 419: “Figure 9” appeared earlier than “Figure 8”.
Citation: https://doi.org/10.5194/egusphere-2024-2142-RC1
RC2: 'Comment on egusphere-2024-2142', Anonymous Referee #1, 07 Oct 2024
This study tests the ability of EMAC to accurately model contrail occurrence based on meteorological conditions. The evaluations are extensive and appear well conducted, despite mixed results. The purpose of this study is currently quite technical, and I request that the text be made clearer, with a key result explained in more detail; with some changes, I expect the study will be suitable for ACP.
General Comments
The skill values (ETS) in Table 3 are very low. The presentation of results here seems appropriate, but surely there is more behind the low skill than the stated reason that this “might be due to the small number of data points” (Line 399). This lack of skill is also possibly suggestive of limits on the methods of this study. For instance, there may be improvement if the same test were applied with RH values from ERA5 rather than EMAC (zero bias by the criteria in Figs. 2-4), but presumably ERA5 also has some RH bias given a dearth of direct observations at these high altitudes. Could the authors please add to their explanation, and also please discuss future avenues for obtaining more accurate contrail estimation in such a comparison?
The study is presented according to a quite technical purpose (assessing two parameters of contrail occurrence in this specific model), but if there are more general insights to convey, this would bolster the study’s suitability for this scientific journal. All the results here currently seem specific to use of the EMAC model. This prompts questions, e.g. are similar biases apparent in other models? Are there conclusions to take from this work that are not model-specific? I’m wondering, for instance, if the low skill scores mentioned above speak generally to a limited current ability to accurately model contrail occurrence?
Specific Comments
Line 1: more of an 'uncertain' than "significant" impact of contrails, as some estimates indicate a negligible forcing but they are overall varied. Cf. Bier & Burkhardt 2022, Bock & Burkhardt 2016 (0.056 W/m2), Chen & Gettelman 2013 (0.013 W/m2), Rap et al. 2010 (<0.01 W/m2)
Bier, A., & Burkhardt, U. (2022). Impact of parametrizing microphysical processes in the jet and vortex phase on contrail cirrus properties and radiative forcing. Journal of Geophysical Research: Atmospheres, 127(23), e2022JD036677.
Bock, L., & Burkhardt, U. (2016). Reassessing properties and radiative forcing of contrail cirrus using a climate model. Journal of Geophysical Research: Atmospheres, 121(16), 9717-9736.
Chen, C. C., & Gettelman, A. (2013). Simulated radiative forcing from contrails and contrail cirrus. Atmospheric Chemistry and Physics, 13(24), 12525-12536.
Rap, A., Forster, P. M., Jones, A., Boucher, O., Haywood, J. M., Bellouin, N., & De Leon, R. R. (2010). Parameterization of contrails in the UK Met Office climate model. Journal of Geophysical Research: Atmospheres, 115(D10).
Line 18: This would be read more cleanly if it just focused on the role of contrails, rather than as a component of “non-CO2 effects”
Lines 25-32: Can the relevance of the CCFs to this particular study please be explained here? Is the connection that this study is testing reliability of the model for future studies using the CCF approach?
Line 32: Isn’t NOx emission on ozone quite outside the focus of this study?
Lines 95-7: Why no unnudged version? The model presumably has different biases on its own than the nudged version. Do the authors know if the RH biases would be worse without the nudging, or if the T bias is affected by (or is largely a result of) the nudging?
Line 104-126: I would have expected this model description to focus more on contrails. Can it be mentioned here that the contrail submodels will be further discussed in Section 2.2? Also, can the relevance of the described methane tracer and CH4 sub-model please be made clear upfront? These are both relevant because they are factors for stratospheric water vapor, right?
Lines 101-2: By “global mean temperature”, is this nudging performed independently at each model level? Should be specified.
Lines 107 and 129: Can the Materials and Methods section please be edited to make clear the relationships among CONTRAIL, PotCov (which is not currently mentioned here), SAC, and ISSR. Also, S4D seems to be quite separate, so I find the references to “two sub-models” and “two approaches” for separate pairs quite confusing, given how the second of each list is the same but not the first, if I correctly understand that CONTRAIL is where SAC is calculated.
Lines 125-6: Isn’t the NAFC interesting for more prominent reasons, e.g. a sizable share of global flights and contrail occurrence?
Lines 129-30: Could this explanation of the two contrail formation approaches please specify upfront how these are similar and differ? The ice microphysical criteria appear overall similar, but the Schmidt-Appleman approach factors in jet engine characteristics, if I understand this correctly. Would a reasonable interpretation be that the second approach is more advanced and expected to be more accurate?
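To illustrate how the Schmidt-Appleman approach folds in engine and fuel properties, unlike a pure ice-supersaturation check, here is a minimal sketch of the mixing-line slope and threshold temperature following Schumann (1996); the emission index, fuel heat content, and propulsion efficiency values are generic assumptions and not necessarily the settings used in the paper's CONTRAIL submodel.

```python
import math

def mixing_line_slope(pressure_pa, eta=0.3, EI_H2O=1.25, Q=43.2e6):
    """Slope G (Pa/K) of the exhaust mixing line after Schumann (1996).

    pressure_pa : ambient air pressure in Pa
    eta         : overall propulsion efficiency (generic assumed value)
    EI_H2O      : water vapour emission index, kg water per kg kerosene
    Q           : specific combustion heat of kerosene, J/kg
    """
    cp = 1004.0   # specific heat capacity of air, J/(kg K)
    eps = 0.622   # molar mass ratio of water vapour to dry air
    return EI_H2O * cp * pressure_pa / (eps * Q * (1.0 - eta))

def threshold_temperature_degc(G):
    """Approximate maximum ambient temperature (deg C) for contrail formation
    at liquid saturation, using the fit from Schumann (1996)."""
    x = math.log(G - 0.053)
    return -46.46 + 9.43 * x + 0.72 * x * x

# Example: at 250 hPa with the assumed parameters, G is roughly 1.7 Pa/K and
# the threshold temperature is about -42 deg C. An ice-supersaturation check,
# by contrast, only asks whether RHice exceeds a threshold and ignores these
# engine and fuel properties.
```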
Lines 144-6: What jet characteristics were factored into CONTRAILS? The cited Schumann 2000 paper mentions a number of propulsion efficiencies. Also, can the authors please comment on whether they suspect the results of this study would be quite different if alternative parameters had been tested?
Lines 216-7: Are there also shape criteria here? How are contrails distinguished from other clouds?
Lines 291-2: ERA5 does not calculate contrail occurrence, right? For another model to have “exhibited a larger contrail formation region compared to ERA5” suggests otherwise.
Lines 307-9: Is “PotCov” simply a direct result of the Schmidt-Appleman criteria described in section 2.2? If so, it should be named in Section 2.2 and not first mentioned here.
Line 354: Do these "low humidity values" matter? These would never form contrails, presumably, so it's confusing for this to be commented on without this clarification.
Lines 368-9: Is it correct to say that the model results are being used to test for ice-supersaturated regions (ISSRs) as described in Section 2.2? I find the current description of “we focus on ice-supersaturated conditions, where the ambient relative humidity with respect to ice (RHice) exceeds 100%” not sufficiently clear how this connects to the methods described earlier.
Line 372: I don’t understand the word “gradually” here. Is the threshold actually evolving over time, or this is meant to convey “moderately”?
Fig. 8: In the lowest panel (PotCov) there are substantial differences between simulations. Can the authors please comment on whether they think this is due to the resolution differences being important or is simply noise?
Fig. 9 caption and Line 466: Two instances of "L31T63" that should be "T63L31".
Lines 476-497: The first 20 or so lines of the Discussion are really Summary. This looks to me more appropriate for the “Summary and Conclusions” section that follows.
Line 550: I don’t see any reference to a higher ETS score to back this statement. Is this based on some analysis that is not shown in the study? This should be stated in the text.
Lines 552-5: I don’t see any reference to the SAC approach in the test for skill (last sub-section of Section 4), which seems to be based entirely on the ISSR approach, so I find this statement confusing.
Citation: https://doi.org/10.5194/egusphere-2024-2142-RC2