This work is distributed under the Creative Commons Attribution 4.0 License.
How the representation of microphysical processes affects tropical condensate in a global storm-resolving model
Abstract. Cloud microphysics are a prime example of processes that remain unresolved in atmospheric modelling at storm-resolving resolution. In this study, we explore how uncertainties in the representation of microphysical processes affect the tropical condensate distribution in a global storm-resolving model. We use ICON in its global storm-resolving configuration with either a one-moment or a two-moment microphysics scheme and perform several sensitivity runs, in each of which we modify parameters of one hydrometeor category of the applied microphysics scheme. Differences between the one- and the two-moment scheme are most prominent in the partitioning of frozen condensate into cloud ice and snow, and can be ascribed to how each scheme defines these habits, which is associated with different process rates. Overall, differences between the simulations are moderate and tend to be larger for individual condensate habits than for more integrated quantities, such as cloud fraction or total condensate burden. Yet the resulting spread of several W m-2 in the tropical energy balance at the top of the atmosphere and at the surface is substantial. Although the modified parameters within one scheme generally affect different process rates, most of the change in the condensate amount of the modified habit, and even in the total condensate burden, can be attributed to a single property: the change in fall speed. Tropical mean precipitation efficiency is also well explained by changes in the relative fall speed across different habits and both schemes.
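As a reading aid, the relations behind two of the quantities highlighted in the abstract can be sketched as follows. These are generic textbook forms, not taken from the manuscript; the coefficients and the exact definition of precipitation efficiency used there may differ.

```latex
% Hedged sketch: generic bulk-scheme relations behind two quantities in
% the abstract; the manuscript's exact definitions may differ.
\begin{align}
  v(D) &= \alpha D^{\beta}
    && \text{fall speed as a power law of particle size } D, \\
  \varepsilon_p &= \frac{P}{C}
    && \text{precipitation efficiency: surface precipitation } P
       \text{ over column-integrated condensation } C.
\end{align}
```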
Status: final response (author comments only)
RC1: 'Comment on egusphere-2024-2268', Anonymous Referee #1, 16 Aug 2024
Review of “How the representation of microphysical processes affects tropical condensate in a global storm-resolving model”
The authors examine the role that uncertainties in the representation of microphysical processes play for the tropical condensate distribution in a global storm-resolving model. I agree with most of the conclusions, but I have some reservations about the comparison between single-moment and double-moment schemes, as I outline below. Therefore, I cannot recommend publication of the manuscript at this stage.
Major comments:
- No observations are used for comparison with the model results. It is therefore difficult to determine whether the modifications yield better results.
- I agree with the discussion of the single-moment and double-moment ensembles, but the discussion of the difference between the single-moment and double-moment schemes does not convince me. Whether or not number concentrations are prognosed does not seem to be the only difference between the two schemes. Changing the description from single-moment to double-moment may make a big difference even for only one hydrometeor (e.g. cloud droplets in Xu et al., 2024). Meanwhile, differences in a single microphysical process scheme may also lead to large differences in water content, cloud cover, and precipitation (e.g. Lee and Baik, 2017). Therefore, if the authors want to compare the differences between single-moment and double-moment schemes, they need to ensure that the descriptions of the other microphysical processes are consistent, as for example with the single-moment and double-moment Thompson schemes (e.g. Hill et al., 2015).
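For readers unfamiliar with the distinction this comment draws, the following is a minimal, purely illustrative sketch (not ICON code) of the structural difference between a one-moment and a two-moment bulk category; the field names and the fixed intercept value are hypothetical.

```python
# Illustrative sketch only (not the ICON implementation): the structural
# difference between one- and two-moment bulk hydrometeor categories.
from dataclasses import dataclass


@dataclass
class OneMomentCategory:
    """Mass is prognostic; number is diagnosed from assumed PSD parameters."""
    q: float          # mass mixing ratio [kg kg^-1]
    n0: float = 8e6   # fixed PSD intercept [m^-4], a tunable assumption


@dataclass
class TwoMomentCategory:
    """Mass and number are both prognostic, so mean size can evolve freely."""
    q: float  # mass mixing ratio [kg kg^-1]
    n: float  # number concentration [kg^-1]

    def mean_particle_mass(self) -> float:
        # size information that a one-moment category cannot carry
        return self.q / max(self.n, 1e-12)
```

As the comment notes, however, the two moment representations also differ in their process formulations, so the state-variable difference sketched here is only part of the story.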
Minor comments:
- Placing the sensitivity-test settings in the appendix makes for a poor reading experience. I recommend adding a table in the main text to explain the settings of the different tests. There is also no reference for some of the test modifications (e.g. 1mom-snow, 2mom-ice).
- I agree with the importance of fall speeds, but the authors should at least show the differences in the microphysical processes affected by the fall speeds in the different tests before drawing the conclusion that “other conversion rates between habits that are affected by the fall speed play a minor role”.
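As a concrete illustration of why so many process rates inherit the habit parameters through the fall speed, the standard textbook expression for the mass-weighted fall speed of a gamma size distribution is sketched below; the specific parameter values in ICON's schemes may differ.

```latex
% Standard textbook relation (ICON's parameter values may differ): for
% N(D) = N_0 D^{\mu} e^{-\lambda D}, m(D) = a D^{b}, v(D) = \alpha D^{\beta},
\begin{equation}
  \bar{v}_m
  = \frac{\int_0^{\infty} v(D)\, m(D)\, N(D)\, \mathrm{d}D}
         {\int_0^{\infty} m(D)\, N(D)\, \mathrm{d}D}
  = \alpha\, \frac{\Gamma(\mu + b + \beta + 1)}{\Gamma(\mu + b + 1)}\, \lambda^{-\beta} .
\end{equation}
```

Every habit parameter ($b$, $\alpha$, $\beta$, $\mu$) thus enters sedimentation directly, which underlines the comment's point that the affected process rates should be disentangled explicitly.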
Hill, A. A., Shipway, B. J., & Boutle, I. A. 2015. How sensitive are aerosol‐precipitation interactions to the warm rain representation? Journal of Advances in Modeling Earth Systems, 7, 987–1004. https://doi.org/10.1002/2014MS000422
Lee, H., & Baik, J.‐J. 2017. A physically based autoconversion parameterization. Journal of the Atmospheric Sciences, 74(5), 1599–1616. https://doi.org/10.1175/jas-d-16-0207.1
Xu, X., Heng, Z., Li, Y., Wang, S., Li, J., Wang, Y., Chen, J., Zhang, P., & Lu, C. 2024. Improvement of cloud microphysical parameterization and its advantages in simulating precipitation along the Sichuan-Xizang Railway. Science China Earth Sciences, 67. https://doi.org/10.1007/s11430-023-1247-2
Citation: https://doi.org/10.5194/egusphere-2024-2268-RC1
RC2: 'Comment on egusphere-2024-2268', Anonymous Referee #2, 26 Aug 2024
The article describes a set of experiments in which parameters describing the habits of different cloud particle and hydrometeor types are varied. Their effects are evaluated in terms of differences between a one-moment and a two-moment scheme and the variations employed within them. This is done for a storm-resolving model configuration, where the authors see extra impetus for their study because at that scale the dynamic forcing of cloud microphysics (convection) is resolved, so uncertainties in the cloud microphysics are more important. The authors find that most of their variations relate to the fall speed of the hydrometeors, which they show correlates with the changes in condensate that they observe. They thus single out the fall speed as an important tuning knob for the whole scheme.
I think this study includes strong ideas, for example the above-mentioned highlight of the fall speed's importance, or the discussion around the precipitation efficiency, and is generally presented well. I discuss additional points below.
### Major

To me the major weakness of the study seems to be that the experimental setup is not rigorous. For an exploration of the parameter space and its associated uncertainties, other studies have used setups that thoroughly sample the space. See for example Regayre et al. (2018) and references therein for the use of perturbed parameter ensembles for that purpose.
This begs the question of why you chose to sample only single parameter values. Also, for the sensitivity simulations it is important which parameters you choose to vary, by how much, and why.
E.g. l. 102: why are you focusing on the functional characterization of the habits?
l. 103-104: "what we deem a plausible range". How exactly did you pick that range? The information in Appendix A suggests e.g. comparison to what's been used in the literature. In addition, you are not sampling the range, but picking one value for each of the parameters (in addition to the default configuration). How did you decide on that value after determining your plausible range (as opposed to e.g. sampling the minimum and maximum value)?
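To make concrete what sampling the range (rather than picking one value) could look like, here is a hedged sketch of a simple Latin hypercube draw over a small perturbed-parameter ensemble; the parameter names and ranges are hypothetical placeholders, not taken from the manuscript.

```python
# Hedged sketch (not from the manuscript): sampling a plausible parameter
# range more thoroughly than with a single perturbed value, e.g. for a
# small perturbed-parameter ensemble. Names and ranges are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# hypothetical plausible ranges for two habit parameters
ranges = {
    "fall_speed_prefactor": (0.5, 2.0),   # relative to the default value
    "psd_shape_parameter_mu": (0.0, 2.0),
}

n_members = 8
n_params = len(ranges)

# simple Latin hypercube: one stratified draw per interval, per parameter
strata = np.tile(np.arange(n_members), (n_params, 1))
u = (rng.permuted(strata, axis=1) + rng.random((n_params, n_members))) / n_members

ensemble = {
    name: lo + u[i] * (hi - lo)
    for i, (name, (lo, hi)) in enumerate(ranges.items())
}
print(ensemble)
```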
l. 106: In explaining your choice it would help e.g. if you listed other parameters that could have been perturbed and that you decided against (because you think they'd have a smaller impact, as you say).
From Appendix A I gather that at least for some of the perturbations you had physical properties in mind, for example affecting the size sorting. In my understanding these are unconstrained properties of the hydrometeors that you deem as sources for uncertainty, because the choice for the corresponding values is underdetermined. I think you should highlight that more in the introduction to justify your choice of parameters. For example you could have a table for the different sensitivity simulations, with a column explaining the target of the sensitivity simulation (e.g. making the snow less dense and more distinct from graupel in 1mom-snow).
For some parameters your choice for the values is justified well (e.g. for the 1mom-rain distribution changing from gamma to Marshall and Palmer), but for others it is not well explained and thus seems random (e.g. for 1mom-ice you choose parameters from different publications, and for c not even the one from the publication but one that's closer to it; for 1mom-snow there is no justification at all for the specific value). Here you could elaborate your reasoning or state that it was chosen ad-hoc and what caveats for your interpretation of the results it brings with it.
l. 131: You state yourself that the parameter perturbations were conservatively chosen. Why do you infer that from the comparison to the one- to two-moment scheme difference (e.g. should that be smaller for physical reasons, or do literature studies suggest that)? As stated above, I think you also need to make clearer why you choose the parameter values "conservatively", rather than e.g. using the minimum and maximum value of your plausible range.

In general, in the introduction as well as in the discussion of the results, the study could benefit from more reference to literature on the impact of unconstrained choices in microphysics schemes. For example, the group of Adele Igel has produced a lot of interesting work on the topic, see e.g. Hu and Igel (2023).
In relation to such literature and to your particular study setup, the particular question you are trying to answer remains unclear to me. You are sparsely and conservatively sampling the parameter space, so it is not a thorough exploration of uncertainty. You are not comparing between resolutions (side note: that would be interesting to add), so you cannot answer whether the change in resolution or in parameters/schemes produces more variance. You are not attempting to tune your model. Is it just a first test of cloud microphysical influences in the storm-resolving ICON version? I would then advise stating that more clearly.
One key result is that changes in condensate can be attributed mainly to a change in fall speed. However, since your perturbations are chosen in a peculiar way, one wonders how much this result is due to your setup. Does fall speed only come out as important because that is the property that you perturb the most?
Here it could help to expand your discussion in l. 111-115 with a description of which processes the habit parameters you change affect directly (for example with a graphic). That way one could better judge which processes even have a chance to compete with the fall speed for importance under the perturbations.
Do you have a similar comparison to Fig. 4 for the other processes that were perturbed?

What do you suggest for how we continue from here?
Your introduction opens up the considerations for choosing between one- and two-moment schemes. Can you add to those considerations?
(How) Should we go about exploring parameter uncertainties more thoroughly?
In l.108-110 you say that all simulations are equally likely. What does that range mean for interpreting single km-scale simulations as it is often done, or how could you further explore that spread?
Do you have ideas for how to constrain the fall speed and related parameters?

l. 87-91: You are comparing the global mean differences between your sensitivity simulations to the global mean differences between different days of the simulation. I agree that this shows some robustness of the sensitivity simulation differences, but it is not clear to me that this is a reasonable metric for comparison. Why not compare to variability in space or over shorter time intervals instead?
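The comparison this comment asks for could look like the following hedged sketch, which measures a sensitivity-run signal against noise estimated either from day-to-day variability or from spatial variability; the function, variable names, and xarray data layout are all assumptions for illustration, not the paper's analysis code.

```python
# Hedged sketch: signal-to-noise of a sensitivity run against two
# variability baselines. Names and data layout are assumed, not from
# the manuscript.
import xarray as xr


def signal_vs_noise(ctrl: xr.DataArray, pert: xr.DataArray) -> dict:
    """ctrl, pert: e.g. condensate burden with dims (time, lat, lon)."""
    signal = float(abs(pert.mean() - ctrl.mean()))
    # baseline 1: day-to-day variability of the control's domain mean
    daily_mean = ctrl.mean(dim=("lat", "lon"))
    noise_time = float(daily_mean.std(dim="time"))
    # baseline 2: spatial variability of the control's time mean
    noise_space = float(ctrl.mean(dim="time").std(dim=("lat", "lon")))
    return {
        "signal": signal,
        "signal / day-to-day noise": signal / noise_time,
        "signal / spatial noise": signal / noise_space,
    }
```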
### Minor

- l. 4-5: "where we modify parameters of one hydrometeor category of the applied microphysics scheme": add "in each" or similar to clarify that each sensitivity simulation modifies one parameter in one category.
- l. 6: "and can be ascribed..." this second part of the sentence is unclear to me.
- l. 7: moderate compared to what?
- l. 19: "resolved away" sounds strange to me, rephrase perhaps?
- l. 22: give some examples of those past studies (at least with citations)
- l. 31: explain NICAM abbreviation
- l. 36: do you mean that at km-scale horizontal resolution it is not so important which resolution one chooses (e.g. 10, 5, or 2 km)? Or that going to km-scale resolution at all is less important compared to the schemes?
- l. 45: allow*s*
- l. 45-51: I value your explanation of the considerations going into using a two-moment scheme or not. You are weighing more flexibility and number information against complexity and computational burden, and conclude that the advantage is unclear. But I don't see the connection to your study, where you are comparing results of the one- and two-moment schemes (and quote the additional computational burden). I therefore think this part of the introduction should explain why you have reason to believe that the two-moment scheme gives different/better results (taking up the point from above, you could e.g. draw on past studies comparing results). Or else take these points up in your results and discussion more explicitly, elaborating what e.g. the additional number information has done, or how complexity got in the way, etc. Or maybe your standpoint is that one can justify either choice (for a one- or two-moment scheme) and that you therefore consider it a source of uncertainty and treat both. All these alternative argumentation avenues may be implicit for you, but they are not clear to me as a reader.
- l. 53: You refer to other studies' findings for the tropics in the first paragraph of that page, but what exactly is your motivation for focusing on the tropics?
- l. 85: why restrict the analysis to 5 of the 10 days?
- l. 85: I think the Figure captions should still say explicitly that they are restricted to the tropics.
- l. 87: give some example citations
- l. 140: allow*s*
- l. 141-146: For a reader not knowing these schemes perhaps a graphic showing the processes included in each of them could help (in combination with how the habit parameters that you adjust impact these processes).
- l. 155: Are these results shown anywhere?
- l. 167: Is this a hypothesis or have you checked that e.g. with analysing process rates?
- l. 179: I don't see where you show that for condensate amount. On pg. 5 it's also only justified with Fig. 5 and C1, which relate to the energy balance.
- l. 190: Point to Fig. B1
- Fig. B1: caption d-f: shortwave
- l. 192: The wording here was confusing to me, e.g. "high-cloud fraction is just a little higher".
- l. 206: why is it "probably small"?
- l. 242: where is the information on the evaporation?
- l. 277: But you also say your perturbations are conservative. So the spread would be underestimated overall?
- l. 281: What do you mean by "common"?
- Figure legends throughout: Why not order the legend by 2mom and then 1mom so that e.g. in Fig. 5 the colours line up? Also, I find the symbols hard to distinguish. Could you use something more different than circle and diamond?
### References

Hu, Arthur Z., and Adele L. Igel. “A Bin and a Bulk Microphysics Scheme Can Be More Alike Than Two Bin Schemes.” Journal of Advances in Modeling Earth Systems 15, no. 3 (March 2023): e2022MS003303. https://doi.org/10.1029/2022MS003303.
Regayre, Leighton A., Jill S. Johnson, Masaru Yoshioka, Kirsty J. Pringle, David M. H. Sexton, Ben B. B. Booth, Lindsay A. Lee, Nicolas Bellouin, and Kenneth S. Carslaw. “Aerosol and Physical Atmosphere Model Parameters Are Both Important Sources of Uncertainty in Aerosol ERF.” Atmospheric Chemistry and Physics 18, no. 13 (July 13, 2018): 9975–10006. https://doi.org/10.5194/acp-18-9975-2018.
Citation: https://doi.org/10.5194/egusphere-2024-2268-RC2

RC3: 'Comment on egusphere-2024-2268', Anonymous Referee #3, 27 Aug 2024
RC4: 'Comment on egusphere-2024-2268', Anonymous Referee #4, 27 Aug 2024
General Comments:
Overall, this is an interesting study that explores the sensitivity of global storm-resolving models that resolve deep convection to the representation of microphysical processes. This research could be a valuable contribution to the literature, as these types of sensitivity studies have not been explored in much detail yet. However, I share the concerns of the other reviewers that this study lacks rigor in terms of the choices made to compare the two different microphysics schemes and in terms of the parameter choices that were made. Given the different process representations in the one- and two-moment schemes and the lack of rigor in the variation of the parameter choices, it is hard to tell how robust the conclusion is that the change in fall speeds is most significant for the representation of cloud microphysical processes. In addition to concerns about the methodology of the study and the analysis of results, the presentation of the research throughout the paper could be significantly improved. The figures were not very well thought out and are often quite difficult to interpret. Notation is often defined only in the figure captions and not fully explained; it should be included in the main text of the paper instead.
Specific Comments:
Lines 22-24. The sensitivity to cloud microphysics in coarse climate models vs. global storm-resolving models is given as a motivation for this study, but nowhere in the paper are the results discussed in the context of how they compare against coarser-scale climate models and their sensitivity to cloud microphysics.
Lines 25-27. Is there really considerable diversity in the representation of cloud microphysical models across the DYAMOND simulations? Cloud microphysical schemes in these models are limited to one or two moment bulk cloud parameterizations which have many known limitations.
Lines 50-51. There have been a number of studies that have demonstrated the advantages of multi-moment schemes over single moment schemes. It’s also worth pointing out here that there is physical uncertainty in many of the microphysical processes.
Lines 81-86. Are there potential biases in terms of using simulations that focus on a two-week time-period during Northern Hemisphere winter?
Section 2.2. The description of the sensitivity experiments should be included in the main text, not as part of the appendix. It makes it very hard to follow or understand the rest of the paper without these descriptions being in the main text.
It would be useful to have a figure or table illustrating the different microphysical processes that are accounted for in both schemes. Given that the differences between the schemes are not solely due to using a one moment or two moment representation for the droplet or particle size distributions, it would be easier for the reader to follow the comparison between the different simulations if it is clear what process rates are included in each scheme.
Lines 105-107. More justification is needed for the choice of plausible parameter values based on past literature.
Lines 107-110. Was there any comparison done against observations?
Figure 1. Why is there a slash in the x and y labels between the variable name and the units? I think this is supposed to denote the units in some way, but it is not conventional. The labels should describe what is shown in words (e.g. mass mixing ratio or number concentration), not use notation that is not clearly defined. The right-hand figure would also be clearer if the number and mass were plotted on different y-axes (left and right, for example). I also don't clearly understand what is shown here. It is not clear whether what is shown represents the “typical” one- and two-moment schemes or those modified for the sensitivity experiments.
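The twin-axis suggestion could be realized as in the following hedged sketch, where number and mass densities of an assumed gamma-type size distribution share the x-axis but get separate y-axes; all values are synthetic placeholders, not data from the manuscript.

```python
# Hedged sketch of the twin-axis suggestion: number and mass densities
# share the x-axis (particle diameter) but use separate y-axes.
# All values below are synthetic placeholders.
import matplotlib.pyplot as plt
import numpy as np

d = np.linspace(1e-6, 5e-3, 400)           # particle diameter [m]
n_d = 8e6 * np.exp(-2000.0 * d)            # number density [m^-4] (assumed)
m_d = (np.pi / 6) * 1000.0 * d**3 * n_d    # mass density [kg m^-4]

fig, ax_num = plt.subplots()
ax_mass = ax_num.twinx()                   # second y-axis, shared x-axis

ax_num.plot(d * 1e3, n_d, color="tab:blue")
ax_mass.plot(d * 1e3, m_d, color="tab:orange")

ax_num.set_xlabel("particle diameter (mm)")
ax_num.set_ylabel("number density (m$^{-4}$)", color="tab:blue")
ax_mass.set_ylabel("mass density (kg m$^{-4}$)", color="tab:orange")
plt.show()
```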
Lines 124. What is a trimodal cloud fraction structure?
Figure 2. Again, the notation and labels here do not follow convention and are very unclear. The color scheme is also challenging to read. What is “specific water vapor at difference to 2mom”? Is this the default 2mom, and how is it defined? Why is it shown here in the overview figure of the ensemble rather than as a separate figure? There is barely any explanation and discussion of this figure in the text, so the choices that are made are frankly confusing. Since the main result discussed in the text here is the difference between the snow and ice amounts in the one- and two-moment schemes, why not have a figure that emphasizes only those two panels?
Since the main difference between the one- and two-moment schemes seems to be associated with the snow and ice amounts, it would make sense to include a sensitivity experiment where the auto-conversion threshold between ice and snow is changed in the one-moment scheme. How does the typical snow-ice auto-conversion threshold differ in the default simulations between the one- and two-moment schemes?
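For context, one widely used threshold-relaxation form of the ice-to-snow auto-conversion this comment refers to is sketched below; the exact formulation and threshold values in ICON's one- and two-moment schemes may differ.

```latex
% One common threshold-relaxation form (ICON's exact formulation may differ):
\begin{equation}
  \left.\frac{\partial q_s}{\partial t}\right|_{\mathrm{auto}}
  = \frac{\max\!\left(q_i - q_{i,\mathrm{crit}},\, 0\right)}{\tau_{\mathrm{auto}}},
\end{equation}
% where q_{i,crit} is the auto-conversion threshold for cloud ice and
% tau_auto is a conversion timescale.
```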
It is also unclear how much the difference in cloud cover between the two schemes plays a role in the interpretation of Figure 2, and potential feedbacks between cloud microphysics and macrophysics are not discussed in the text in this section. Since all of the metrics are aggregated as global or regional mean profiles, it is not clear whether these types of effects would be apparent from this analysis.
Figure 4. Again, the notation and labels should be described in words rather than using undefined notation. If the control in each case refers to the default one-moment or default two-moment scheme, it does not make sense to put both on the same plot, since the normalization then means two different things. The choice of axis ranges is also quite strange in this figure; why are the ticks not evenly spaced or labeled? It is hard to understand why these choices were made for the presentation of the data here, and the text gives no explanation either.
Figure 5. This figure needs to be larger, and I also think it would be easier to interpret if the TOA and surface were plotted as separate figures, rather than a single continuous figure, as epsilon here means two different things. What do the error bars represent? It would also be helpful to more clearly differentiate between the one- and two-moment schemes (using, for example, filled versus empty markers, rather than just differences in shape). Since the one-moment and two-moment sensitivity experiments are not directly comparable, it also does not really make sense to use the same colors as though they were directly comparable. What is IvP?
Figure 6. This figure is also too small.
Conclusions. It would be useful to discuss how the results for global storm-resolving models compare against the sensitivity of cloud microphysics in coarser climate models. Other recent studies have also pointed to the importance of ice fall speeds and ice-snow auto-conversion in perturbed parameter ensembles of CAM6 (Duffy et al., 2024), and it would be useful to cite these results here in the context of the GSRM sensitivity study.
Citation: https://doi.org/10.5194/egusphere-2024-2268-RC4

AC1: 'Response to reviews', Ann Kristin Naumann, 15 Nov 2024
Model code and software
Code for Naumann et al. "How the representation of microphysical processes affects tropical condensate in a global storm-resolving model" Ann Kristin Naumann https://doi.org/10.17617/3.OD9NTK