SWEET – Shallow Water Equation Environment for Tests v1.0
Abstract. SWEET is open-source software for the numerical simulation of differential equations discretized with global spectral methods, both on the bi-periodic plane and on the sphere. Although not restricted to them, its main focus is on numerical developments for the shallow water equations (SWE), since they play a crucial role in developing new numerical methods for climate and weather simulations.
SWEET's main purpose is to bridge the gap between the development of new time integration methods for atmospheric dynamical cores and high-performance computing. This is done by providing a fast and efficient environment for developing and analyzing time discretization methods for the SWE while reducing spatial errors to a minimum through the use of global spectral methods. In addition, the performance of new time integration methods can be assessed on HPC systems. Regarding the numerics, this is achieved through a versatile implementation that allows the user to quickly run and combine different time-stepping schemes and to flexibly choose, in a composable way via command line arguments, which terms of the governing equations (i.e., which physical processes) are considered in the time integration, enabling rapid exploration of time integration methods. Concerning HPC, SWEET supports various ways to explore parallel-in-time integration methods on large-scale HPC clusters. To analyze the results, SWEET also contains many benchmark tests, including standard test cases relevant to atmospheric modeling research. These features make SWEET a robust and powerful tool for researching temporal schemes for atmospheric circulation models.
This paper summarizes the main features of SWEET and provides some numerical examples illustrating its application.
General comments:
This paper presents a new Python and C++ spectral solver for the shallow water equations, and closely related models, on the torus and 2D sphere. The main novelty is the interface for constructing and composing advanced temporal integrators for these equations, for the purpose of developing, debugging, testing, and comparing timestepping techniques. A broad range of integrators are implemented in the code, including Runge-Kutta methods, exponential integrators, semi-Lagrangian schemes, spectral deferred correction, and parallel-in-time methods. Overall, this seems like a nicely scoped project with a clear purpose and strong execution. The paper is also well written. It provides a good pedagogical overview for timestepping techniques in geophysical flow simulations, and will be a useful reference for new users of the model. I recommend the paper for publication after addressing a few suggestions and technical corrections.
Specific comments:
1. In section (3.2 Parallelization), maybe some comment should be made regarding MPI / distributed memory parallelism. This is later discussed in the context of the parallel-in-time methods, but a quick reference to that upcoming discussion would be helpful here for completeness.
2. In section (4.5 Limitations), the lack of support for multi-step schemes is mentioned. It may be useful to add a bit more context here -- how prevalent are such schemes in other GFD codes? Do they have substantial advantages over the included schemes in any way? Is adding them currently work in progress, or is it seen as out of scope for SWEET?
3. The code's efficiency is mentioned at several points, but little concrete performance data is provided. This is perhaps not the main point of the model, but it would still be valuable to provide some discussion and data about the code's performance for potential users. Figure 6b provides some wall clock data, but it would be helpful to reframe these measurements for at least one specific run using an interpretable metric (such as DOF-iterations per CPU-second) in the discussion in the text.
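To make the suggested metric concrete, a minimal sketch of the intended calculation follows. All numbers are hypothetical placeholders, not measurements from the paper, and the function name is illustrative:

```python
def dof_iters_per_cpu_second(n_dof, n_iterations, wall_seconds, n_cores):
    """Throughput metric: degrees-of-freedom updates per CPU-second.

    n_dof        -- spatial degrees of freedom of the run
    n_iterations -- number of time steps (or solver iterations) performed
    wall_seconds -- measured wall clock time of the run
    n_cores      -- number of CPU cores used
    """
    return (n_dof * n_iterations) / (wall_seconds * n_cores)


# Hypothetical example: 1e6 DOF, 100 time steps, 50 s wall time on 4 cores.
throughput = dof_iters_per_cpu_second(1_000_000, 100, 50.0, 4)
print(throughput)  # 500000.0 DOF-iterations per CPU-second
```

Reporting even one such normalized number alongside Figure 6b would let readers compare SWEET's throughput against other spectral codes without rerunning the benchmarks.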
4. The other notable absence is any mention of GPU implementation. This seems like it may not be too difficult, given that GPU support is now available in SHTns. Of course, this isn't necessary for building new timestepping schemes and studying their accuracy, but it is relevant for comparing the performance of different routines, since memory-vs-compute tradeoffs differ on CPUs vs GPUs. Many geophysical models are now moving to GPU architectures, so even if GPU support is not planned for SWEET, some discussion about this should be added (perhaps to the parallelism section).
Technical corrections:
5. Equation (16) is missing a time differential in the integral.
6. Equation (17) seems like it should have a quadrature sum over f(u_m) in place of the integral.
7. Line 423: "low-expensive" -> "low-expense"