Breakups are Complicated: An Efficient Representation of Collisional Breakup in the Superdroplet Method
Abstract. A key constraint of particle-based methods for modeling cloud microphysics is the conservation of total particle number, which is required for computational tractability. The process of collisional breakup poses a particular challenge to this framework, as breakup events often produce many droplet fragments of varying sizes, which would require creating new particles in the system. This work introduces a representation of collisional breakup in the so-called "superdroplet" method which conserves the total number of superdroplets in the system. This representation extends an existing stochastic collisional-coalescence scheme and samples from a fragment-size distribution in an additional Monte Carlo step. This method is demonstrated in a set of idealized box model and single-column warm-rain simulations. We further discuss the effects of the breakup dynamic and fragment-size distribution on the particle size distribution, hydrometeor population, and microphysical process rates. This representation of collisional breakup is able to produce a stationary particle size distribution, in which breakup and coalescence rates are approximately equal, and it recovers expected behavior such as precipitation suppression in the column model. Furthermore, representing breakup has potential benefits that extend beyond warm-rain processes, such as the ability to capture mechanisms of secondary ice production in the superdroplet method. The breakup algorithm presented here contributes to an open-source pythonic implementation of the superdroplet method, "PySDM", which will facilitate future research using particle-based microphysics.
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
CC1: 'super-droplets make breakups easier', Axel Seifert, 28 Nov 2022
Dear colleagues,
Thanks for this very interesting manuscript. I appreciate the effort to find a breakup implementation that is appropriate for super-droplets. Having worked on collisional breakup myself, I take the liberty to make some subjective comments. This is not a formal review, just an opportunity for scientific discussion.
1. Filament breakup:
Low and List write about filament breakup that 'the original two drops usually reappeared in some fashion as recognizable remnants of the colliding drops' (LL82b, page 1609). In Figure 11 of LL82a the fragment distribution functions for filament breakup show three distinct peaks corresponding to the remnants of the two original drops and the small satellites. Hence, for filament breakup of two drops with masses A and B, the remnants of the original drops have masses A' and B' and in addition, we find a certain number of small satellite drops of mass C'. All three masses are distinctly different in the case of filament breakup. I don't see how it is possible to describe the outcome of such a breakup event with only 2 super-droplets. Of course, you can assume that A'=B', but this is not what the laboratory experiments show. You can assume B'=C', but this is also inconsistent with drop physics. You may assume that there already exists a super-droplet with a mass similar to C', but this is an additional assumption. In my opinion, it is simply impossible for filament breakup to conserve the number of super-droplets and at the same time give an accurate description of the drop physics seen in laboratory experiments.
2. Super-droplet breakup based on individual mass conservation
If you allow yourself to create (at least temporarily) new super-droplets, the implementation of collisional breakup in a super-droplet code is actually quite straightforward. The super-droplet approach has the great advantage that we can make use of individual mass conservation, which greatly simplifies the implementation of collisional breakup. Much of the complication of traditional breakup parameterizations arises from the fact that the size distribution of fragments has to conserve mass. In a super-droplet model, this mass conservation comes naturally. This makes super-droplets, in my opinion, the method of choice for collisional breakup and more accurate than the old-fashioned bin microphysics. Bin microphysics has only integral mass conservation and therefore needs additional closure assumptions to describe collisional breakup. This is described in the Appendix of
Bringi, V., Seifert, A., Wu, W., Thurai, M., Huang, G. J., & Siewert, C. (2020). Hurricane Dorian outer rain band observations and 1D particle model simulations: A case study. Atmosphere, 11(8), 879.
Sorry that we did not bother to write a dedicated paper on this super-droplet breakup algorithm, but the implementation is really straightforward. For us, the focus was on understanding the physics of the raindrop size distribution in the observations using super-droplets as a tool to do so.
In this algorithm, it is eventually necessary to limit the number of super-droplets by merging similar-sized drops (the satellites of filament breakup). For droplets, which have only a one-dimensional particle distribution, this merging step is not difficult. In my opinion, it is also cleaner from a conceptual point of view to have an accurate description of the drop physics and a separate merging step afterward. The latter is only necessary to achieve computational efficiency in 2D or 3D simulations. Separating physics from numerics is usually a good thing, and we should strive to do so for Lagrangian particle methods as well.
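For illustration only, a merge of two similar-sized superdroplets that conserves both total droplet number and total water mass might look like the minimal Python sketch below (the attribute names are hypothetical; this is not the implementation of Bringi et al. 2020):

```python
def merge_superdroplets(n_1, x_1, n_2, x_2):
    """Merge two similar-sized superdroplets (multiplicity n, droplet mass x)
    into one, conserving total droplet number and total water mass."""
    n_merged = n_1 + n_2
    x_merged = (n_1 * x_1 + n_2 * x_2) / n_merged  # number-weighted mean droplet mass
    return n_merged, x_merged

# e.g. merging two satellite-drop superdroplets of similar size:
# merge_superdroplets(1.0e6, 1.1e-8, 8.0e5, 0.9e-8)
```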
3. Straub et al. fragment distribution function
A note of caution on the Straub et al. fragment distribution. Based on more recent laboratory experiments of
Szakall, M., & Urbich, I. (2018). Wind tunnel study on the size distribution of droplets after collision-induced breakup of levitating water drops. Atmospheric Research, 213, 51-56.
the McFarquhar (2004) or the original Low and List (1982) fragment distribution functions should be preferred over the one published by Straub et al. (2010). The filament mode of Straub et al. seems to be too narrow and does not compare favorably with the new laboratory data of Szakall and Urbich (2018). In my opinion, this issue is caused by some simplifications in the DNS simulations that are the empirical basis for the Straub et al. parameterization. As far as I remember, the raindrops were not oscillating in the initial condition of the DNS of Schlottke et al. (2010). Unfortunately, we were also not able to perform a large number of simulations for the same collision parameters back then. Doing all these DNS runs to sample the parameter space was challenging enough, especially since DNS needs to explicitly sample the eccentricity of the collision. This may explain the bias, especially in the width of the fragment distribution of the satellites in the filament mode.
4. Stochastic sampling of the fragment distribution
The existing parameterizations of the fragment distribution (Low and List 1982, McFarquhar 2004, Straub et al. 2010) are super-positions of lognormal and Gaussian distributions. Hence, it is possible to sample directly from these analytic distributions. I don't see the need for the approach suggested in Appendix A. To apply these parameterizations, it is only necessary to first decide which breakup mode (filament, disc, sheet) occurs for the collision event; then the fragments can be sampled directly from the lognormal or Gaussian fragment distribution functions. At least, that's how my super-droplet implementation of collisional breakup works.
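As an illustration of this two-step procedure, here is a minimal Python sketch; the mode probabilities and distribution parameters below are placeholders, whereas in the published parameterizations they depend on the colliding drop sizes and the collision kinetic energy:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Placeholder breakup-mode probabilities and fragment-diameter parameters [m];
# real values come from the cited parameterizations and depend on the colliding pair.
modes = [
    {"name": "filament", "p": 0.6, "dist": "lognormal", "mu": np.log(0.3e-3), "sigma": 0.4},
    {"name": "sheet",    "p": 0.3, "dist": "gaussian",  "mu": 0.8e-3,         "sigma": 0.2e-3},
    {"name": "disc",     "p": 0.1, "dist": "gaussian",  "mu": 1.2e-3,         "sigma": 0.3e-3},
]

def sample_fragment_diameter():
    """Step 1: pick a breakup mode; step 2: draw a fragment diameter from that mode."""
    idx = rng.choice(len(modes), p=[m["p"] for m in modes])
    mode = modes[idx]
    if mode["dist"] == "lognormal":
        return rng.lognormal(mean=mode["mu"], sigma=mode["sigma"])
    return max(rng.normal(loc=mode["mu"], scale=mode["sigma"]), 0.0)

fragment_diameters = [sample_fragment_diameter() for _ in range(10)]
```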
5. Limiter
I think the limiter equation (A2) is not consistent with the physics of collisional breakup. It is possible, and consistent with lab measurements, that the larger of the original droplets grows during the collision event, i.e. A' > A is possible. The limiter (A2) would suppress this behavior because it limits A' to max(A, B).
6. A methodological suggestion
As mentioned above, I would suggest developing a reference model that captures all the essential and known physics in detail. The beauty of the super-droplet approach is that it allows one to do this very naturally. Based on this reference model, it is then possible to systematically investigate approximations and simplifications to improve computational efficiency. Especially in cloud physics, where validation with observations can be a challenge by itself, such a two-step approach is strongly recommended.
Thanks again for your nice work. I like the conceptual and idealized way of thinking. This helps to understand the interaction between the various processes.
Best regards, Axel
Citation: https://doi.org/10.5194/egusphere-2022-1243-CC1
AC1: 'Reply on CC1', Emily de Jong, 05 Dec 2022
Dear Axel,
We welcome your comments—thank you for initiating what I hope will be a fruitful discussion! I am glad to hear your feedback, considering your experience studying collisional breakup. Allow me to respond to each of your comments individually, and then I hope we can pursue the most interesting topics further.
Filament breakup: You are correct in observing that the breakup method we propose cannot describe the outcome of a breakup event resulting in A', B', and C' droplets. Using your notation, suppose we have the breakup event A + B → A' + B' + C'. In fact, we only represent either A', B', or C', as one of the outgoing superdroplets contains excess uncollided droplets A, leaving only one superdroplet to represent the collision fragments. If we consider a single collision, this representation is indeed unphysical. However, averaged over many collisions, sampling from the fragment size distribution (including A', B', and C') many times will recover the statistics of the overall size distribution. It is not a perfect or exact solution, but it does allow us to have a probabilistic representation of breakup without creating new superdroplets. The stochasticity that goes with sampling from the fragment sizes could also be a benefit, in that two simulations might produce slightly different results, giving us some uncertainty bounds on the dynamics.
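To make the bookkeeping concrete, the following is a schematic Python sketch of a number-conserving breakup of one superdroplet pair (multiplicity n, droplet mass x, sampled fragment mass x_frag). It is an illustration consistent with the description above, not the actual PySDM algorithm, and it omits limiters and edge cases:

```python
def breakup_pair(n_a, x_a, n_b, x_b, x_frag):
    """Number-conserving breakup of a superdroplet pair.

    Assumes n_a >= n_b, so that n_b real collisions are represented:
    superdroplet A keeps its (n_a - n_b) uncollided droplets, while
    superdroplet B takes on the fragments of the sampled mass x_frag,
    with its multiplicity set by mass conservation.
    """
    assert n_a >= n_b > 0
    collided_mass = n_b * (x_a + x_b)                   # total mass entering the collisions
    n_a_new, x_a_new = n_a - n_b, x_a                   # leftover uncollided droplets
    n_b_new, x_b_new = collided_mass / x_frag, x_frag   # fragments of the sampled size
    return (n_a_new, x_a_new), (n_b_new, x_b_new)

# e.g. breakup_pair(n_a=2e6, x_a=4e-9, n_b=1e6, x_b=1e-9, x_frag=5e-10)
```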
Thank you for drawing attention to your own implementation of breakup—I failed to reference it in this work, but will correct that in the revisions. I see that the merging of superdroplets is discussed in a related work (https://gmd.copernicus.org/articles/7/695/2014/), and that it involves creating an ordered list. I do not see any discussion of parallelization in that work, which leads to my concern that such an operation would break the linear scaling that the original SDM implementation has. Is this how your computations proceed? How does it scale in practice?
The motivation for this implementation is to maintain the key virtues of the original SDM coalescence algorithm, namely that: (i) it has linear complexity in number of superdroplets; and (ii) it is embarrassingly parallelizable (constant state-vector size, with no data dependencies across particle pairs). It then follows that we have a crucial requirement to keep the number of superdroplets constant at all times, and may be unable to perform the merging your method requires.
I am very grateful to you for bringing up these caveats for the Straub/Schlottke parameterizations. If anything, your note suggests that we should implement all three parameterizations and observe whether there is any difference in the outcome in the simple box test cases. That will give some indication of how much of the sensitivity to the parameterization our simple “uniform breakup” algorithm can actually capture.
Perhaps the way we have presented the sampling in Appendix A is overly complicated or unnecessary, because in fact it accomplishes exactly the steps that you mention. We use the random number to decide first which mode of breakup occurs, rescale the random number, and then sample directly from the fragment size distribution. The only caveat is that the second step, sampling directly, requires an analytic cumulative distribution function. This is easily accomplished with common computing packages such as CUDA and Numba, which both support the erfc function, and the CDF can be approximated where an analytic form is not available.
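For example, for a Gaussian fragment mode the inverse CDF can be written in terms of the erfc family; the sketch below uses scipy's erfcinv for clarity, whereas a GPU or Numba kernel would use an equivalent approximation built on erfc:

```python
import numpy as np
from scipy.special import erfcinv  # inverse of erfc

def sample_gaussian_mode(u, mu, sigma):
    """Inverse-CDF draw from a Gaussian fragment mode.

    `u` is the uniform(0, 1) random number that has already been used to pick
    the breakup mode and then rescaled, as described above.  The Gaussian CDF
    is F(x) = 0.5 * erfc(-(x - mu) / (sigma * sqrt(2))), so inverting gives
    x = mu - sigma * sqrt(2) * erfcinv(2 * u).
    """
    return mu - sigma * np.sqrt(2.0) * erfcinv(2.0 * u)

# e.g. the median of the mode: sample_gaussian_mode(0.5, mu=1.0e-3, sigma=2.0e-4) == 1.0e-3
```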
Thank you for pointing this out—it sounds like a more appropriate limiter would then be to limit A' to (A + B) in size.
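In code, that revised limiter would amount to something like the following one-line sketch (variable names are hypothetical):

```python
def limit_fragment_mass(x_frag, x_a, x_b):
    """Cap the sampled fragment mass at the total colliding mass (x_a + x_b)
    rather than at max(x_a, x_b), so that outcomes with A' > A remain possible."""
    return min(x_frag, x_a + x_b)
```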
It would indeed be ideal to have a reference model with the exact solution, perhaps following the implementation you suggested above in (2), or even using an existing code that allows the number of superdroplets to grow, such as yours. The one challenge that I can foresee with this type of comparison is that both approaches still involve sampling from a distribution, which might lead to divergent behaviors unless we average over enough superdroplets or simulation instances.
Better yet, we could compare to a deterministic model instead, using the full fragment size distribution in any breakup step. One such option is exploring a set of cases with analytical solutions (as in https://iopscience.iop.org/article/10.1088/0305-4470/28/11/004). For other cases, a bin model would be the obvious choice, but given the caveats you warned about regarding mass conservation and closure assumptions, there may be other errors involved.
Thank you again for the detailed and thoughtful remarks, Axel, and for prompting what I hope will be an interesting discussion.
Best,
Emily
Citation: https://doi.org/10.5194/egusphere-2022-1243-AC1
CC2: 'stochastic mode selection', Axel Seifert, 20 Jan 2023
Dear Emily and co-authors,
Sorry, due to the holidays it took some time to continue with this discussion. Your idea of a stochastic sampling of the fragment modes is indeed intriguing. To be honest, I had some serious doubts that this would give the correct solution. When I implemented it based on your manuscript, I found my concerns confirmed at first because the resulting raindrop distribution looks very narrow and unrealistic. But then my colleague, Christoph Siewert, pointed out that a mass-weighted sampling would be more consistent and should give the correct behavior. My simulations suggest that this is indeed the correct implementation. Please take a look at the Supplement of this comment.
Best regards, Axel
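For illustration, a mass-weighted selection of the representative fragment size along the lines suggested in CC2 could look like the following Python sketch (the arrays describing one collision's fragments are hypothetical; this is not the code from the Supplement):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def pick_fragment_mass_weighted(frag_masses, frag_numbers):
    """Pick the representative fragment mass with probability proportional to the
    mass carried by each candidate (mass * number), rather than to its number."""
    frag_masses = np.asarray(frag_masses, dtype=float)
    frag_numbers = np.asarray(frag_numbers, dtype=float)
    weights = frag_masses * frag_numbers
    return rng.choice(frag_masses, p=weights / weights.sum())

# e.g. a filament-like outcome: two large remnants plus many small satellites
# pick_fragment_mass_weighted([2.0e-6, 1.5e-6, 1.0e-8], [1, 1, 20])
```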
RC1: 'Comment on egusphere-2022-1243', Anonymous Referee #1, 02 Jan 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2022/egusphere-2022-1243/egusphere-2022-1243-RC1-supplement.pdf
RC2: 'Comment on egusphere-2022-1243', Anonymous Referee #2, 30 Jan 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2022/egusphere-2022-1243/egusphere-2022-1243-RC2-supplement.pdf
AC2: 'Comment on egusphere-2022-1243', Emily de Jong, 07 Feb 2023
We are grateful to the two reviewers for their feedback, as well as to Axel Seifert and Christoph Siewert for their helpful and interactive comments on this work, including verification of a suggested modification to the algorithm. Based on the reviews and discussion, we intend to undertake major revisions both to improve the method and to make the manuscript more rigorous and suitable for publication. Among those revisions, we intend to take the following steps:
- Include a reference model for comparison to validate our method. We agree that concerns about a lack of verification and rigor in the method are justified. We will implement a reference model that does not make the simplification of sampling a single fragment size in order to validate the proposed method. Axel Seifert and Christoph Siewert have already provided a delightful example of how this validation may be undertaken. To aid the reproducibility of future microphysics comparisons, we hope to see more open-source access to such microphysics modeling codes.
- Perform mass-weighted sampling from the fragment size distribution. Following Axel's suggestions and preliminary results, we agree that mass-weighted sampling from the FSD is likely the physically correct choice. This will be validated against a reference model, as described above.
- Clarify the role of the receiver droplet during sampling. We believe that several of the reviewers' concerns may be clarified through the language and conceptual figures included in the manuscript. Notably, we will clarify that the large receiver droplet is included in the fragment size distribution (rather than neglected), and therefore may be the sampled "fragment size" during the Monte Carlo step.
In addition, we will explore the suggestions to include a convergence study of the proposed method with respect to the number of superdroplets and to perform a 2D simulation. Other comments, regarding the choice of language in the manuscript and the relevance to secondary ice production, will also be addressed. We hope that after undertaking these revisions, the community will find this work a useful contribution to the field and a basis for further development of particle-based microphysics.
Citation: https://doi.org/10.5194/egusphere-2022-1243-AC2
Model code and software
- PySDM: Sylwester Arabas; Piotr Bartman; Emily de Jong; Clare Singer; Michael A. Olesik; Oleksii Bulenok; Ben Mackay; Sajjad Azimi; Kamil Górski; Anna Jaruga; Bartosz Piasecki; Codacy Badger. https://zenodo.org/record/7306034#.Y3fwHuzMI-Q
- PySDM-examples: Sylwester Arabas; Clare Singer; Emily de Jong; Sajjad Azimi; Piotr Bartman; Oleksii Bulenok; imdula; Ben Mackay; Anna Jaruga; Wenhan Tang. https://zenodo.org/record/7308668#.Y3fwHuzMI-Q