the Creative Commons Attribution 4.0 License.
LEX v1.4: A New Large-Eddy Simulation Model in JAX with GPU Acceleration and Automatic Differentiation
Abstract. Large-eddy simulations (LESs) are essential tools for studying atmospheric turbulence and clouds and play critical roles in the development of turbulence and convection parameterizations. Current numerical weather models have approached kilometer-scale resolution as supercomputing facilities advance. However, this resolution range lies in the so-called gray zone, where subgrid-scale (SGS) turbulence actively interacts with resolved motion and significantly influences the large-scale characteristics of simulated weather systems. Developing SGS turbulence models for the gray zone therefore requires new LES models, which must run sufficiently fast when simulating large domains and enable new approaches to developing SGS models. Here we used the Python library JAX to develop a new LES model, based on the generalized pseudo-incompressible equations formulated by Durran (2008). The new LES model supports efficient parallelism and runs fast with GPU acceleration. For a classic warm bubble case, the traditional Smagorinsky model fails to reproduce the correct structural evolution of the warm bubble, though it can modestly correct the rising speed in gray-zone-resolution simulations. Utilizing JAX's capability for automatic differentiation, we trained a deep-learning-based SGS turbulence model for the same case. The trained deep learning SGS model, based on simple three-dimensional convolutional neural networks (CNNs), enables this physics-deep-learning hybrid model to accurately simulate the expansion of the thermal bubble and the development of rotors surrounding the center of the bubble at a gray-zone resolution. The gray-zone simulation results are comparable to those of the benchmark LES resolution.
-
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
-
- Preprint (1431 KB)
- Supplement (1397 KB)
Journal article(s) based on this preprint
Interactive discussion
Status: closed
-
RC1: 'Comment on egusphere-2025-2568', Gijs van den Oord, 24 Jul 2025
I very much like this manuscript and the effort to create a new atmospheric LES with a modern, flexible programming paradigm, JAX. The twofold benefit of GPU acceleration and automatic differentiation is well explained and illustrated with the warm bubble test cases and performance tables. I believe this will be a valuable research tool for future investigations into the value of machine learning for parameterizations crossing the grey zone.
Some minor remarks:
- Line 57: There is a listing of the advantages of the hybrid machine learning approach to atmospheric sciences, but weather forecast accuracy is in my opinion not one of them. Purely data-driven DL models, such as Google's GraphCast and ECMWF's AIFS, nowadays exceed the forecast accuracy of both physical and hybrid-ML models. So I would recommend removing 'accurate weather predictions' from this line.
- In line 195 the authors explain that the accumulated MSE forms the basis of the loss calculation for training the SGS. Are the authors concerned about a 'smearing' effect of the SGS tendency, and could this be observed?
- In line 203 it is explained that water vapor is clipped to positive values after each correction time step. Could this be resolved with an extra loss term?
- I would drop the end of the first paragraph in section 5.2, "This can be further proven...". It comes across as over-explanatory and the last sentence is not proper English.
- Line 336: the statement that running LEX on 1 GPU is as fast as running CM1 on 600 cores does not follow from the numbers presented; it would require a strong-scaling study of CM1 to verify, and if that scaling is not ideal, CM1 will not reach the LEX performance on 600 cores at all. So please indicate whether this statement follows from separate performance measurements of CM1.
- Reference: I believe a reference to "JAX-Fluids: A fully-differentiable high-order computational fluid dynamics solver for compressible two-phase flows" (https://doi.org/10.1016/j.cpc.2022.108527) should be added as it closely aligns with this work.
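The loss-related remarks above (the accumulated MSE over rollout steps at line 195, and whether the hard clipping of water vapor at line 203 could be replaced by an extra loss term) can be illustrated with a minimal JAX sketch. This is a hypothetical illustration, not the authors' actual code: `step_fn`, `params`, and the field names `theta`/`qv` are assumptions.

```python
import jax
import jax.numpy as jnp

def rollout_loss(params, step_fn, state0, targets, qv_weight=1.0):
    """Accumulated-MSE rollout loss with a soft non-negativity
    penalty on water vapor, instead of hard clipping.

    step_fn(params, state) -> next state; states are dicts with
    'theta' (potential temperature) and 'qv' (water vapor).
    targets: one reference state per rollout step.
    """
    state = state0
    loss = 0.0
    for target in targets:
        state = step_fn(params, state)
        # MSE accumulated over the rollout steps
        loss += jnp.mean((state["theta"] - target["theta"]) ** 2)
        loss += jnp.mean((state["qv"] - target["qv"]) ** 2)
        # soft penalty: only negative water-vapor values contribute
        loss += qv_weight * jnp.mean(jnp.square(jnp.minimum(state["qv"], 0.0)))
    return loss
```

Because the penalty is differentiable, `jax.grad` propagates through it during training, which is what would allow the positivity constraint to be learned rather than enforced by clipping after each step.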
Some missing pieces in this paper:
- I believe the manuscript lacks a proper explanation of the applied boundary conditions. I suspect you are using periodic lateral boundaries and a sponge layer at the top? Please elaborate in the theoretical section.
- I would love to see a short example of a more realistic emergent cumulus case, and especially whether the trained SGS parameterization from the warm bubble can be transferred to a cloudy atmosphere.
- The manuscript lacks an outlook with respect to missing components in LEX: microphysics, radiation, surface scheme. It should be mentioned in the article what the status of these elements is and how this limits the applicability of LEX.
- There are no multi-GPU benchmarks in the paper. I believe JAX with XLA can scale across many GPUs; does LEX also scale beyond a single device? Please elaborate in the performance section.
- There is no mention of hyperparameter choices or their tuning in the SGS training section. It could be nice to add a small exploration of this.
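On the multi-GPU point above: JAX programs can indeed scale across devices with sharded maps and collective operations. The following is a minimal, illustrative sketch of a data-parallel reduction (the field and function names are hypothetical, not taken from LEX), which also runs unchanged on a single device:

```python
import jax
import jax.numpy as jnp

def local_mean(x):
    # per-device mean, then averaged across devices via an all-reduce
    return jax.lax.pmean(jnp.mean(x), axis_name="dev")

# replicate the function across all local devices (GPUs if available)
device_mean = jax.pmap(local_mean, axis_name="dev")

n_dev = jax.local_device_count()
# shard a field along the leading axis, one slice per device
field = jnp.arange(n_dev * 4.0).reshape(n_dev, 4)
means = device_mean(field)  # identical value replicated on every device
```

With equal-size shards, the all-reduced value equals the global mean, so the same program scales from one device to many without code changes; a strong-scaling benchmark along these lines would address the comment.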
Citation: https://doi.org/10.5194/egusphere-2025-2568-RC1
- AC1: 'Reply on RC1', Xingyu Zhu, 10 Nov 2025
-
RC2: 'Comment on egusphere-2025-2568', Anonymous Referee #2, 11 Sep 2025
Please find my comments in the attached pdf.
- AC2: 'Reply on RC2', Xingyu Zhu, 10 Nov 2025
Peer review completion
Data sets
LEX v1.4 Data, Zhu Xingyu, Qu Yongquan, and Shi Xiaoming, https://doi.org/10.5281/zenodo.15730773
Model code and software
LEX v1.4, Zhu Xingyu, Qu Yongquan, and Shi Xiaoming, https://doi.org/10.5281/zenodo.15486687
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 2,154 | 125 | 33 | 2,312 | 53 | 37 | 47 |
Xingyu Zhu
Yongquan Qu
Xiaoming Shi