the Creative Commons Attribution 4.0 License.
Parametric Sensitivity and Constraint of Contrail Cirrus Radiative Forcing in the Atmospheric Component of CNRM-CM6-1
Abstract. The climate impact of aviation CO2 emissions is well established. However, the impact of non-CO2 effects, such as those from contrails, is still subject to large uncertainties. An often neglected source of uncertainty comes from climate model sensitivity to numerical parameters representing subgrid-scale processes. Here we investigate the sensitivity of contrail radiative forcing to parametric uncertainty based on the atmospheric component of the CNRM-CM6-1 coupled model. A perturbed parameter ensemble is generated by sampling twenty-two adjustable parameters involved in convection, cloud microphysics and radiative transfer processes. A surrogate model based on multi-linear regression is used to explore the full range of contrail radiative forcing due to parametric uncertainty. Based on an optimization algorithm and a climatological skill score, we derive a constrained range of contrail radiative forcing from equally skillful model versions with different sets of parameters. We find a contrail radiative forcing best estimate of 56 mW m-2 with a 5-95 % confidence interval of 38-70 mW m-2. Finally, a sensitivity analysis shows that the model parameters controlling contrail lifetime play a major role in the estimation of contrail radiative forcing.
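The surrogate-modelling step described in the abstract can be sketched as follows. This is a hypothetical, minimal illustration, not the authors' code: the uniform parameter sampling, the toy `run_model` function and every coefficient value are invented purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_params = 100, 22   # illustrative PPE size; 22 parameters as in the abstract

# Hypothetical stand-in for the perturbed parameter ensemble: uniform sampling
# of normalized parameter values in [0, 1]
X = rng.uniform(size=(n_members, n_params))

# Hypothetical stand-in for a full GCM integration returning a contrail RF
# (mW/m2); the coefficients and noise level are invented for demonstration
def run_model(x):
    return 50.0 + 30.0 * x[0] - 20.0 * x[5] + 10.0 * x[12] + rng.normal(0.0, 2.0)

y = np.array([run_model(x) for x in X])

# Multi-linear regression surrogate: y ~ b0 + sum_i b_i * x_i
A = np.hstack([np.ones((n_members, 1)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# The cheap surrogate can then explore parameter space far more densely
# than the GCM itself could
X_new = rng.uniform(size=(10000, n_params))
rf_pred = np.hstack([np.ones((10000, 1)), X_new]) @ coeffs
```

The surrogate replaces each expensive model integration with a dot product, which is what makes the dense exploration of the forcing range tractable.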
Status: closed
CC1: 'Comment on egusphere-2023-2478', Sidiki Sanogo, 21 Nov 2023
Dear Maxime et al.,
You have led a very nice analysis. Congratulations. However, I would like to comment on the analysis presented in figure 4:
You have compared the RHI distributions of ARPEGE at 1°x1° with the RHI distribution of the IAGOS data. IAGOS data are very local measurements, since they are measured every 4 s. How do you justify this? Do you think that these two distributions are comparable?
Thank you.
Citation: https://doi.org/10.5194/egusphere-2023-2478-CC1
- AC1: 'Reply on RC1-RC2-CC1', Maxime Perini, 19 Feb 2024
RC1: 'Review of egusphere-2023-2478', Anonymous Referee #1, 01 Dec 2023
Review of "Parametric Sensitivity and Constraint of Contrail Cirrus Radiative Forcing in the Atmospheric Component of CNRM-CM6-1" by Perini et al.
This manuscript does two things. First, it describes a contrail parameterization, and second, it runs a Perturbed Parameter Ensemble (PPE) with it to vary cloud parameters and see how they affect the contrail forcing. This paper is very confusing. It might be publishable with major revisions, or it might more usefully be two separate papers. Each part needs more description and detail, as noted below.
Major comments:
1. The contrail parameterization is not described sufficiently, and there does not seem to be any uncertainty analysis of the different terms in it.
2. The simulations and methods for estimating radiative forcing are not described. How are the simulations run?
3. The contrail parameterization simply does not seem physical. Please show how it relates to the Schmidt-Appleman criterion. Given a temperature and humidity, what contrail fraction is predicted? It is impossible to assess the parameterization otherwise.
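For reference, the Schmidt-Appleman threshold the referee refers to can be sketched using the approximate fit of Schumann (1996). The engine and fuel values below (propulsion efficiency, emission index, heat content) are illustrative assumptions, not values taken from the manuscript under review.

```python
import math

def sa_threshold_celsius(p_pa, eta=0.3, ei_h2o=1.25, q_fuel=43e6, cp=1004.0, eps=0.622):
    """Approximate Schmidt-Appleman threshold temperature (deg C) for contrail
    formation in air saturated with respect to liquid water, after the fit of
    Schumann (1996). Illustrative defaults: eta = overall propulsion efficiency,
    ei_h2o = water vapour emission index (kg per kg fuel), q_fuel = fuel
    specific heat content (J/kg)."""
    # Slope of the exhaust mixing line in the (vapour pressure, temperature)
    # diagram, in Pa/K
    g = ei_h2o * cp * p_pa / (eps * q_fuel * (1.0 - eta))
    lg = math.log(g - 0.053)
    return -46.46 + 9.43 * lg + 0.720 * lg * lg

# At a cruise-level pressure of 250 hPa this gives a threshold near -42 deg C;
# contrails can form when the ambient temperature falls below this value
t_lc = sa_threshold_celsius(250e2)
```

Plotting predicted contrail fraction against such a threshold curve, as a function of temperature and humidity, would directly answer the referee's question.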
4. Numbers are thrown around in the abstract and introduction that are not comparable (e.g. different dates), this needs to be cleaned up.
5. I’m not sure what your RF represents since you do not describe the simulations or the method to estimate radiative forcing. How is the radiative forcing estimated? Is it Effective Radiative Forcing (ERF) including an adjustment in the model or RF?
6. I am very confused regarding how the emulator is built. The use of EOFs is pretty novel and interesting, but that means it needs more description and examples: I cannot seem to understand what you are doing, or how the results are derived.
7. I am concerned that all you have tested is the sensitivity to the background state of the model and evolution of clouds. It’s not clear to me that you have really looked at the uncertainty of the contrails themselves (e.g. P and Lt in equation 1). See point 3.
8. Finally, a minor point: The English grammar could use a read through from a native speaker if possible, there are a number of mistakes, particularly the use of plurals. I sympathize, English is a really horrible language for this.
Specific Comments
Page 1, L10: Are you simulating an RF or an ERF? They can differ by a factor of 2… if there is adjustment, then it's an ERF, I believe?
Page 1, L10: you should note that this estimate is for the year 2000. It is confusing for the reader.
Page 1, L17: You should state the 2000 or 2005 estimate from Lee et al 2021, otherwise the reader thinks you have a vast difference (also need to note year 2000 in abstract).
Page 3, L68: Is cloud fraction in the scheme prognostic? I.e. da/dt will increase it?
Page 3, L69: is Qsat for liquid or ice? Or both?
Page 3, L70: Grammar. “If there are no ice nuclei”
Page 3, L70: does this mean activated ice nuclei?
Page 3, L70: Suggest ice nuclei not be x, since I think that is used in equation 1 as distance, and it is confusing
Page 3, L74: I don’t understand what is happening. What is the difference between a and a*? The statement “in this case the contrail cirrus parameterization is activated whereas in every other case…” Is a = contrail coverage and a* = all clouds?
Page 3, L79: Can you illustrate what the contrail coverage tendency is as a function of, say, temperature and humidity? How does that compare to the Schmidt-Appleman criterion?
Page 3, L81: Saturated with respect to what? Ice?
Page 5, L93, Figure 2: Where is the Southern Hemisphere in the figure (right or left, use -lat for axis labels)? Also: please put some uncertainty bars on this based on internal variability. How long are the runs? Are these statistically different?
How do you get a different cloud fraction (singular "cloud fraction" is better English usage) in both hemispheres? Where are the contrails? Or is that embedded in the difference? Please explain. You are adding 2 things at once.
Page 5, Fig 3: Great to see this. Can you add zonal mean latitude-height panels of the ice supersaturation frequency from the model and AIRS?
Page 5, L105: again, show lat-height in Figure 3 as well.
Page 7, L117: what is the definition of contrails? Is this just da/dt per timestep from the parameterization? It’s not contrail cirrus since it exactly matches the emissions locations.
Page 7, L117: maximum contrail coverage (singular I think is appropriate here, no of).
Page 7, L121: how is the net radiative effect calculated? Is it a difference between two runs? Or is it an offline calculation? You need to describe this more completely. Is this just a single number? What are the optical properties of a contrail? There needs to be much more detail here.
Page 7, L127: You should be clear that it is not the contrails that improve the clouds, it is the ice supersaturation used for cirrus clouds. Maybe you can do a run without contrails but with the new parameterization? Or have you done that in figure 5 already? Please clarify.
Page 8, L145: how do five modes yield 3 variance values? And shouldn't all the EOFs add to 100%? Also, are these 2D (lat, lon) or 3D (lat, lon, time) fields? I.e. are radiative fluxes and precipitation given for each level, or just surface (precip) and top of atmosphere (rad fluxes), as is more common?
Page 8, L146: I don’t really follow this, since the method of using EOFs is not that common. Can you show a detailed example to help explain this for the reader?
Page 9, L160: I thought the PCs were projections of ensemble members or something? How many EOFs are being used? Are they fixed? Again, I’m not following what is going on here. What does predicting the first 5 PCs get you?
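For readers equally unfamiliar with the EOF-based emulation being questioned in these comments, the usual workflow can be sketched as follows. This is a hypothetical illustration on synthetic data, not the authors' implementation: decompose the ensemble of output fields into EOFs, regress the leading principal components on the parameters, and reconstruct a predicted field for a new parameter set.

```python
import numpy as np

rng = np.random.default_rng(1)
n_members, n_grid, n_params, n_modes = 100, 500, 22, 5

X = rng.uniform(size=(n_members, n_params))   # PPE parameter values (synthetic)
F = rng.normal(size=(n_members, n_grid))      # one flattened output field per member (synthetic)

# EOF decomposition of the ensemble anomalies via SVD: F - mean = U S Vt
f_mean = F.mean(axis=0)
U, s, Vt = np.linalg.svd(F - f_mean, full_matrices=False)
pcs = U[:, :n_modes] * s[:n_modes]            # leading principal components, one row per member
eofs = Vt[:n_modes]                           # corresponding spatial patterns
explained = s[:n_modes] ** 2 / (s ** 2).sum() # variance fraction carried by each mode

# Emulator step: regress each leading PC on the parameters (multi-linear)
A = np.hstack([np.ones((n_members, 1)), X])
B, *_ = np.linalg.lstsq(A, pcs, rcond=None)

# Prediction for a new parameter set: mean field + predicted PCs projected
# back onto the EOF patterns
x_new = rng.uniform(size=n_params)
f_pred = f_mean + (np.concatenate([[1.0], x_new]) @ B) @ eofs
```

Predicting only the first few PCs compresses the emulation problem from one regression per grid point to one regression per retained mode, at the cost of discarding the variance outside those modes.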
Page 9, L164: Are you predicting the value at each location or the global average? LW, SW or total?
Page 10, L177: are you predicting the global mean value? Does this assume there is a unique set of parameters to get any contrail RF? What if the same RF is produced by a different set of parameters?
Page 10, L181: Again, I’m not following here. You have different parameters for each RFi bin? But if you have the lowest Etot doesn’t that just specify the parameters and a given contrail RF for those parameters?
Page 11, L198, Fig 6: I don’t understand how the optimized calibration can basically be out of sample of all the points. I obviously do not understand what you are doing.
Page 11, L203: Temperature yes, but the radiative fluxes appear near the middle of the distributions (vertical axis of figure 6, right?).
Page 12, L215: I do not follow this multiplying by two. Where did that come from?
Page 12, L217: again, I don’t follow this conclusion.
Page 13, L222: the model tends to shift towards…
Page 13, L224: But a higher error score is bad, right? Or good? I was reading from Figure 6 that it was a minimization problem.
Page 14, L225: But this also means the emulator is not very effective at reproducing the results of the candidates, particularly with respect to contrail RF.
Page 14, L234: note which years are being assessed.
Page 14, L242: what about varying the input assumptions to the contrail parameterization? You have tested the sensitivity of overall clouds and contrails to parameters, which is interesting, but your contrail parameterization is very crude and not especially physical, what about testing the parameters in that?
Page 14, L255: but you have just tested part of the uncertainty, you have not varied the contrail parameterization. You have just tested its sensitivity to the overall environmental state.
Page 15, L264: I think you need to do more analysis of the contrail parameterization uncertainty.
Citation: https://doi.org/10.5194/egusphere-2023-2478-RC1
- AC1: 'Reply on RC1-RC2-CC1', Maxime Perini, 19 Feb 2024
RC2: 'Comment on egusphere-2023-2478', Anonymous Referee #2, 07 Dec 2023
The stated aims of this manuscript are to: (i) describe a new ice-supersaturation/contrail parameterisation in the ARPEGE model; (ii) evaluate the model parameterization against observations; and (iii) explore the parametric dependence of contrail radiative forcing on different calibrations of ARPEGE-Climat. Upon reading the manuscript, I recommend that it be rejected on the following grounds:
- The literature review in the Introduction is incomplete. For example, it does not provide a review of the contrail models that are currently available. It does not outline, describe, and review the main sources of uncertainty influencing the contrail climate forcing. Most importantly, it also does not provide a clear motivation to perform the stated research objectives. What is the novelty of this research, and how does it address the existing research gap?
- The proposed contrail cirrus parameterisation, i.e., Eq. (1), is very difficult to understand. The derivation, assumptions, and limitations of Eq. (1) cannot be found in the manuscript, thereby hindering the review process.
- There are major discrepancies between the model and observations (see Fig. 2, Fig. 3, and Fig. 4) which the authors brushed off as "reasonable agreement" (Line 101) and "better agreement" (Line 115). What does "reasonable" and "better" mean? Please quantify these statements with statistical metrics.
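The quantification the referee requests could be as simple as reporting standard metrics alongside each model-observation comparison. A minimal sketch, with an illustrative helper name and made-up data, neither taken from the manuscript:

```python
import numpy as np

def agreement_metrics(model, obs):
    """Standard metrics that would quantify statements like 'reasonable
    agreement': mean bias, RMSE and pattern correlation. Name and inputs
    are illustrative, not from the manuscript."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = float(np.mean(model - obs))
    rmse = float(np.sqrt(np.mean((model - obs) ** 2)))
    corr = float(np.corrcoef(model, obs)[0, 1])
    return {"bias": bias, "rmse": rmse, "corr": corr}

# Illustrative data only, e.g. collocated model and observed values
m = agreement_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 4.1])
```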
- The manuscript then proceeds to develop a surrogate model on the grounds of computational efficiency, and calibrates it using the ARPEGE parameters. Given the significant discrepancies between the ARPEGE model and observations, as mentioned in the previous point, the question arises concerning the practical utility of these obtained results.
- The authors compared the simulated precipitation from the model with observations, i.e., Table 1, which makes no logical sense. How does precipitation influence contrail formation and its associated properties?
- Figures 6 and 8 are both of very poor quality and unreadable.
While there are also numerous minor comments that merit consideration, it might be more judicious to prioritise addressing the major comments as outlined above before delving into these minor comments. In light of these substantial concerns, I suggest the paper be rejected outright.
Citation: https://doi.org/10.5194/egusphere-2023-2478-RC2
- AC1: 'Reply on RC1-RC2-CC1', Maxime Perini, 19 Feb 2024
Viewed
- HTML: 200
- PDF: 90
- XML: 31
- Total: 321
- Supplement: 42
- BibTeX: 18
- EndNote: 17