This preprint is distributed under the Creative Commons Attribution 4.0 License.
Efficient and Stable Coupling of the SuperdropNet Deep Learning-based Cloud Microphysics (v0.1.0) to the ICON Climate and Weather Model (v2.6.5)
Abstract. Machine learning (ML) algorithms can be used in Earth System models (ESMs) to emulate sub-grid-scale processes. Due to the statistical nature of ML algorithms and the high complexity of ESMs, these hybrid ML-ESMs require careful validation. Simulation stability needs to be monitored in fully coupled simulations, and the plausibility of results needs to be evaluated in suitable experiments.
We present the coupling of SuperdropNet, a machine learning model for emulating warm rain processes in cloud microphysics, into ICON 2.6.5. SuperdropNet is trained on superdroplet simulations and predicts updates of the bulk moments for cloud and rain. It replaces the accretion, autoconversion, and self-collection of rain and cloud droplets in two-moment cloud microphysics. We address the technical challenge of integrating SuperdropNet, developed in Python and PyTorch, into ICON, written in Fortran, by implementing three different coupling strategies: embedded Python via the C Foreign Function Interface (CFFI), pipes, and coupling of program components via YetAnotherCoupler (YAC). We validate the emulator in the warm bubble scenario and find that SuperdropNet runs stably within the experiment. In comparing experiment outcomes from the bulk moment scheme and SuperdropNet, we find that the results are physically consistent, and we discuss differences that are observed for several diagnostic variables.
In addition, we provide a quantitative and qualitative computational benchmark for the three coupling strategies—embedded Python, the YAC coupler, and pipes—and find that embedded Python is a useful software tool for validating hybrid ML-ESMs.
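The three strategies differ mainly in how data crosses the Fortran/Python boundary. The pipe-based pattern can be sketched in a few lines; note this is a hypothetical illustration, not the authors' implementation: the moment names `qc`, `nc`, `qr`, `nr`, the JSON line framing, and the identity placeholder standing in for the trained network are all assumptions made for this sketch. A driver process (standing in for ICON) streams bulk cloud and rain moments to a separate Python "emulator" process and reads the updated moments back:

```python
import json
import subprocess
import sys
import textwrap

# Hypothetical stand-in for the SuperdropNet process: it reads bulk
# moments as JSON lines from stdin and writes updated moments to stdout.
# In the real coupling a trained PyTorch network would compute the
# update; here an identity placeholder keeps the sketch self-contained.
EMULATOR = textwrap.dedent("""\
    import json, sys
    for line in sys.stdin:
        moments = json.loads(line)
        print(json.dumps(moments), flush=True)
""")

def couple_via_pipe(moments):
    """Send one set of bulk moments through a pipe and read the update."""
    proc = subprocess.Popen(
        [sys.executable, "-c", EMULATOR],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    out, _ = proc.communicate(json.dumps(moments) + "\n")
    return json.loads(out)

# One microphysics step: cloud/rain mass (qc, qr) and number (nc, nr).
updated = couple_via_pipe({"qc": 1.0e-3, "nc": 1.0e8, "qr": 0.0, "nr": 0.0})
```

A production coupling would start the emulator process once, keep the pipe open for the whole simulation, and load the network weights at startup, so that inference cost rather than process startup dominates each microphysics step.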
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
- RC1: 'Comment on egusphere-2023-2047', Anonymous Referee #1, 14 Dec 2023
Arnold et al. present results from coupling a ML emulator of the superdroplet microphysics scheme McSnow to the ICON model. They replace the warm rain collision-coalescence processes with the ML model, SuperdropNet. They couple SuperdropNet to ICON in three different ways and compare the performance for a simple test of a rising warm bubble. They show that the coupling is stable and efficient and produces physically reasonable results. Overall, I thought this paper was very clearly written and concise. I have just a few minor comments.
L73: As someone unfamiliar with ICON, I found the description of the torus grid a bit confusing. Is this a 2-dimensional domain? And it's only 5km large? Or is the resolution 5km, not the domain length?
L73: Why is the timestep so short? Is this how short the dynamical timestep of ICON usually is? Is the microphysics stable with a longer timestep?
L98: The "cold atmosphere" comment seems a bit oversimplified. In the test later in the paper you show that there is seemingly only ice/snow without any liquid or rain. But in many situations for temperatures below freezing mixed-phase clouds are prevalent in the atmosphere. I would want to see a case with both liquid and ice to see how the ML and bulk schemes work together, or if this presents an additional challenge to the coupling.
L182: I do not understand the phrase "neither does coupling with SuperdropNet result in dramatically different values" -- Do you mean that the 3 coupling approaches all give the same results? Or is this still referring to the first part of that sentence that says no negative values are observed?
Fig 4: I think the latent heat flux should be proportional to the evaporation (LHF = rho*Lv*E). Therefore there is no need to show both panels (a) and (b). This is also evident because the figures look identical.
L202: As above, LHF and E are proportional, they both scale with the same factors. This discussion doesn't make sense to me.
Table 3: It would be easier to read if you swap the 3rd and 4th lines of the table so that the processes are ordered in descending order of time taken. Also, to match the text, and make it easier to read, you could write the times in μs rather than s. (Like L244).
L251/260: Can you not actually couple McSnow to ICON even for this very simple dynamical warm bubble experiment? Or is this result presented in the Sharma and Greenberg (in prep) paper? It would be very nice to show a comparison between the emulator and the superdroplets in the online setting.
Citation: https://doi.org/10.5194/egusphere-2023-2047-RC1
- RC2: 'Comment on egusphere-2023-2047', Paul Bowen, 14 Dec 2023
General comments:
The authors present their work coupling SuperdropNet into the ICON model. SuperdropNet is a machine learning emulator of warm rain collision-coalescence. Part of the paper explores integrating SuperdropNet into the ICON model and identifies three methods to use. The numerical performance of these methods is compared for a standard test case of a warm rising bubble against the ICON bulk scheme.
The coupling is successful for all methods and produces reasonable results on comparison with the current bulk scheme within ICON. I found this paper of merit and an interesting read. I do have minor comments and suggestions for improvements.
I note that the emulator paper Sharma and Greenberg 2023 is in preparation, and I appreciate the authors providing it in the attached assets. If possible, it would help support this work to have the article submitted before this work is published. However, it does not significantly detract from this work to not yet have Sharma and Greenberg 2023 submitted.
Specific comments:
Line 25: Perhaps a more robust introduction to what an ML algorithm is; not everyone will be familiar. You could consider referencing and discussing first usages and speedups in atmospheric models, e.g. Chevallier et al. 2000, Use of a neural-network-based long-wave radiative-transfer scheme in the ECMWF atmospheric model, and Krasnopolsky et al. 2005, New Approach to Calculation of Atmospheric Model Physics: Accurate and Fast Neural Network Emulation of Longwave Radiation in a Climate Model. At the moment it is assumed that we all understand what an ML algorithm consists of.
Line 44: Here you say that SuperdropNet has been recently released, however I don’t think SuperdropNet has yet been released. I understand that you have made your integration of SuperdropNet into ICON available. I would make the distinction here, or, point to a standalone release of SuperdropNet, as I have been unable to find it.
Line 59/60: This first sentence encourages incorporating online testing/development into an emulator. I agree with this, however it is followed by describing PyTorch, Tensorflow, and others. It almost appears that this work is implying that PyTorch etc have functionality for online testing/development. I suggest separating this encouragement of online testing/development into its own paragraph.
Line 62: States that it is necessary to integrate the two programming languages with one another. I disagree with the use of ‘necessary’. Certainly I agree that it is much quicker and easier to use some sort of python-Fortran coupler, however I don’t think it is entirely necessary. You could for instance write your emulator in Fortran, certainly this has been done before.
Line 63: I don’t think the reference to Brenowitz and Bretherton 2019 supports the claim that it is necessary to integrate the two programming languages with each other. Brenowitz and Bretherton 2019 do provide a method of coupling, but I don’t see this work supporting that it is ‘necessary’ to do so.
Line 64: You claim that ‘hybrid ML-ESMs must be computationally powerful enough for verification experiments without requiring rewriting the ML code in Fortran’. It would certainly be helpful if you didn’t have to rewrite in Fortran, if the coupled Fortran-Python code performed similar speed-wise that would be excellent news, however rewriting ML code in Fortran is a viable development route. I am aware of research groups which are doing this.
Line 73: Do you mean 2D periodic boundaries in the x-direction? The wording of ‘torus grid’ is perhaps less common and could be confusing.
Line 73: I assume that the top and bottom of the domain is a fixed boundary and not part of this torus? It is somewhat unclear.
Line 73: Do you mean the domain length in the x-direction is 5km? What about the z-direction?
Line 73: It would be helpful to explicitly state that this is a 2D vertical slice, which has an x and a z dimension.
Line 82: What does McSnow stand for? It looks like it might be a Monte Carlo ice particle physics model. It would be worth writing this.
Line 82: The reference to Seifert and Rasp 2020 is misplaced here. In this context it reads as if Seifert and Rasp 2020 contributed to the development of McSnow, however this is not the case. I suggest separating this to stress the distinction between how Brdar and Seifert, and Seifert and Rasp, both contributed to the training of superdropNet.
Line 86: ‘we use multiple realisations of simulations to train SuperdropNet’ – it is my understanding that this paper has nothing to do with the actual training of SuperdropNet, and the training all happened in the (in prep) Sharma and Greenberg 2023. I would adjust this line to make the distinction obvious that Sharma and Greenberg 2023 did the training.
Line 116: ‘DKRZ Levante system’. This is the first introduction of this system, I would add a brief description of what this is.
Line 134: I would like to know what the limitations of the DKRZ Levante system are such that you are not able to test performance in a heterogeneous setting. Perhaps there are very good reasons, but it is unclear here.
Lines 151-164: There are other ML libraries you could have added here. For example, there is FTA (Fortran Torch Adapter) as used in https://doi.org/10.3389/feart.2023.1149566. There is also the inference-engine written in Fortran: https://github.com/BerkeleyLab/inference-engine. I’m sure there are reasons why the authors omit these, perhaps these are not suited for this purpose, but consider adding them with reasons why they are not appropriate.
Line 169: The bulk moment scheme in the two-moment cloud microphysics module of ICON is used, but it would be useful to include a description of how the bulk scheme functions and represents the collision-coalescence.
Figure 4: I do find the grey area a bit confusing for two reasons: 1. It does not appear to have an axis. 2. On fig4a it looks like the grey area starts at -2.5kg m-2 s-1 and on fig4b it looks like it starts at -6.5Wm-2. I think this would be rectified by adding another labelled y-axis for the grey area.
Fig4 description: Heat flux is mentioned here, I would state that you have a ‘larger heat flux out of the grid-cell’.
Fig7, lines 209-219: I would comment on how the SuperdropNet rain droplet profiles are very smooth in comparison to the bulk moment scheme. I would like it explained why the bulk moment scheme has these rather sharp increases at say around 4500m at 40mins, 60mins, 80mins.
Table 2: Comparing the bulk scheme to the SuperdropNet scheme seems somewhat odd. I suppose that the scheme which SuperdropNet is based upon is not in ICON and would be difficult to put in, so I understand why this comparison was done. I would like some explanation for why this speed comparison is still relevant, and why the scheme SuperdropNet is based upon cannot be put into ICON directly. I think you mention this in your conclusions, but a brief sentence here would be helpful too.
Line 225: Perhaps I have missed it in this work, but how much is the speedup of SuperdropNet over the model it is based on? This seems like an important thing to include.
Line 263: I would argue that it ‘would likely’ rather than ‘might’ increase performance. It’s fairly clear that you would see speedups by fully integrating a superdropNet model written in Fortran within ICON. I do agree however, that losing flexibility of development is a good reason why this was not done.
Technical corrections:
Line 7: ICON acronym should be properly introduced here.
Line 22: Reference order should be in date order, i.e. Christensen and Zanna 2022 should come first.
Line 29: Brenowitz and Bretherton (2018) should be in parenthesis.
Line 41: ‘total droplet concentration and the total liquid water content’ would be more explicit.
Line 55/56: References should be written in date order.
Line 71: ‘testcase’ should read ‘test case’.
Line 119: The reference to ‘Rigo and Fijalkowsi (2018)’ should be within parenthesis to read better.
Line 171: ‘incurred’ not ‘incured’.
Fig4 description: Replace ‘higher’ with ‘larger’. Higher can be ambiguous and be interpreted as ‘more positive’.
Line 263: ‘loosing’ should be written ‘losing’.
Citation: https://doi.org/10.5194/egusphere-2023-2047-RC2
- AC1: 'Comment on egusphere-2023-2047', Shivani Sharma, 01 Mar 2024
Peer review completion
Journal article(s) based on this preprint
Data sets
The ICON model code (version 2.6.5), including the coupling modules and the experiment results, is available for download. By accessing the ICON model code, you accept the license conditions of the original code that are included in the repository. C. Arnold, S. Sharma, and T. Weigel, https://doi.org/10.5281/zenodo.8320093
Model code and software
SuperdropNet inference code, modules describing the coupling between SuperdropNet inference and generic Python code, analysis scripts and Jupyter notebooks, and the experiment description files. C. Arnold, S. Sharma, and T. Weigel, https://doi.org/10.5281/zenodo.10069121