This work is distributed under the Creative Commons Attribution 4.0 License.
GeoFlood (v1.0.0): Computational model for overland flooding
Abstract. This paper presents GeoFlood, a new open-source software package for solving the shallow-water equations (SWE) on a quadtree hierarchy of mapped, logically Cartesian grids managed by the parallel, adaptive library ForestClaw (Calhoun and Burstedde, 2017). The GeoFlood model is validated using standard benchmark tests from Neelz and Pender (2013) as well as the historical Malpasset dam failure. The benchmark test results are compared against those we obtained from GeoClaw (Clawpack Development Team, 2020) and the software package HEC-RAS (Hydrologic Engineering Center's River Analysis System, US Army Corps of Engineers) (Brunner, 2018). The Malpasset outburst flood results are compared with those presented in George (2011) (obtained from the GeoClaw software), model results from Hervouet and Petitjean (1999), and empirical data. The comparisons validate GeoFlood's capabilities on idealized benchmarks relative to other commonly used models, as well as its ability to efficiently simulate highly dynamic floods in complex terrain, consistent with historical field data. Because it is massively parallel and scalable, GeoFlood may be a valuable tool for efficiently computing large-scale flooding problems at very high resolutions.
Status: open (extended)
- CC1: 'Comment on egusphere-2025-2173', Xiaofeng Liu, 19 Sep 2025
- RC1: 'Comment on egusphere-2025-2173', Anonymous Referee #1, 31 Mar 2026
Journal scope and acronym confusion: The release of a new flooding solver falls more within the remit of journals like GMD or JAMES than NHESS! This is especially true because the assessments do not consider detailed risk analysis for any proper real-world case study, but rather rely on existing benchmarks that are well known for validating flood-modelling numerical solvers.
There are also existing models called "GeoFlood", and this creates some confusion around the chosen acronym (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018WR023457; https://www.mdpi.com/2673-7418/6/1/19). Why not stick with GeoClaw, given its legacy and popular use, and acknowledge the proposed developments as improvements to GeoClaw?
Technical novelty: It is not clear to me how this work significantly differs from the Qin et al. (2019) work on GeoClaw (https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2019MS001635) with hybrid CPU/GPU parallelisation. Both use the augmented Riemann solver (George, 2011), and both are based on the shallow water equations. Both use AMR, but the so-called GeoFlood seems to have explored more AMR refinement criteria and portability to different computing platforms; if so, it is not clear how these additions significantly improve on GeoClaw. Detailed analyses are needed, supported by accuracy and efficiency metrics for all the tests (missing from the paper).
As for using HEC-RAS with the SWE-ELM solver to support comparisons, its validity remains questionable (https://doi.org/10.1016/j.jhydrol.2021.126962) compared to a Riemann-based solver: the SWE-ELM solver uses an implicit finite volume scheme with a Newton-Raphson iteration, not a Riemann solver as in GeoClaw/GeoFlood!
Introduction and "overland flooding" scope: The introduction lacks a decent review of existing (structured-grid-based) flood flow solvers, so one cannot understand "why this work?" compared to major similar developments, a few of which are cited hereafter: https://doi.org/10.1029/2019MS001635; https://doi.org/10.5194/gmd-14-7117-2021; https://doi.org/10.5194/gmd-16-2391-2023; https://doi.org/10.5194/gmd-18-9827-2025 (many more exist).
It is not clear why or how the proposed GeoFlood model differs from GeoClaw, aside from efficiency enhancements and the exploration of different AMR criteria. For a model release targeting "overland flooding", rain-on-grid capability (pluvial floods) is necessary, as is a catchment-scale case study for validating it (many case-study datasets are available at https://zenodo.org/records/6907286). In other words, the Malpasset dam-break case does not really help here and was already showcased with GeoClaw. There are plenty of other overland (fluvial) flooding cases, such as Carlisle 2005, with set-up data also online (https://zenodo.org/records/5047565).
Comparative scope: The comparison against HEC-RAS as a reference is not convincing, and the same can be said of the comparison with GeoClaw. In fact, HEC-RAS has a different numerical make-up (e.g., implicit, no Riemann solver), whereas the so-called "GeoFlood" looks like a (so far minor) upgrade of GeoClaw (e.g., portability to different platforms). The key reference solver or model for comparison must be GeoClaw/GeoFlood itself on the finest available grid resolution; otherwise, there is no proper way to validate.
Signs of unreliable validation: The first benchmark (Figures 5-6) has flat terrain, which is where any AMR solver will lead to conservative results. I do not think this is a suitable benchmark to showcase the merit of the GeoFlood solver.
The second benchmark (Figure 9) is better for two reasons (not recognised by the authors): first, because this is a fluvial flood, its slow, frictional wave propagation poses a challenge to AMR sensors; and, more importantly, at gauge points 5, 10, and 11 the inflow forcing becomes inactive and flow propagation is left entirely to the solver. That is why, in the authors' Figure 9, we can see major differences at gauge points 5 and 10 (11 is not included). Besides, the common benchmarking resolution for this test is 20 m x 20 m, despite the advantage of having a 2 m x 2 m DEM resolution. Why did the authors take the coarsest resolution to be 10 m, refining to 5 m (level 1) and 2.5 m (level 2), while at the same time running HEC-RAS at 10 m resolution? If anything, HEC-RAS (uniform mesh) should be run at the 2 m x 2 m DEM resolution as a reference prediction to be matched by (AMR-based) solvers. As an aside, an openly accessible FV2-MUSCL (Riemann-solver-based) uniform-grid simulation result at the 2 m x 2 m DEM resolution is available [Sec. 4.3.3, "Slow filling of multiple ponds", https://zenodo.org/records/5921132].
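The resolution mismatch the reviewer raises follows from simple arithmetic. A minimal sketch, assuming a refinement ratio of 2 (the standard for quadtree AMR hierarchies such as ForestClaw's); the function name is illustrative, not from the paper:

```python
def amr_resolutions(dx_coarsest, n_levels, ratio=2):
    """Grid spacing on each level of a quadtree AMR hierarchy,
    assuming each refinement level divides the spacing by `ratio`."""
    return [dx_coarsest / ratio**level for level in range(n_levels)]

# Resolutions implied by the authors' setup: 10 m coarsest grid, two refinement levels.
print(amr_resolutions(10.0, 3))  # [10.0, 5.0, 2.5] -- never matches the 2 m DEM
```

Starting from a 10 m coarse grid, halving can only reach 5 m, 2.5 m, 1.25 m, ..., so the hierarchy never lands exactly on the 2 m DEM resolution the reviewer advocates as the reference.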
The third benchmark's grid patterns (Figure 11) and results (Figure 12) do not make sense. Fig. 11 shows that the finest resolution spanning the block(s) remains too coarse compared to what it should be (see, for example, https://doi.org/10.1016/j.advwatres.2020.103559); it should be 0.1 or 0.2 m (0.3 m is too large and can yield overestimated predictions, as is the case with the authors). Why is Manning's n 0.05 (it should be 0.01) [https://doi.org/10.1016/j.jhydrol.2020.125924]? What is the resolution for HEC-RAS? It should be the finest 0.1 m uniform grid, for which there is experimental data (this is not reported). Fig. 12 excludes the experimental water-level and velocity data (e.g., https://doi.org/10.1016/j.jhydrol.2020.125924) and shows higher-than-usual predictions. Why not use the same conditions as Neelz and Pender (2013), under which the other flood models were benchmarked and the UCL experimental data appear in the plots: Manning's n of 0.01 (not 0.05) and an initial depth condition of 0.4 m to 0.02 m (not 8 m to 0.4 m as the authors did):
“4.6.1 Test 6A: laboratory scale
4.6.1.1 Introduction
This dam-break test case (see Appendix A.6 for details) is the original benchmark test case available from the IMPACT project (Soares-Frazao and Zech, 2002), for which measurements from a physical model at the Civil Engineering Laboratory of the Université Catholique de Louvain (UCL) are available. The physical dimensions are those of the laboratory model. The test involves a simple topography, a dam with a 1m wide opening, and a building downstream of the dam, see Figure 13. An initial condition is applied, with a uniform water level of 0.4m upstream from the dam, and 0.02m downstream.”
The Malpasset benchmark (Figure 15) is limited to a comparison of MAXIMUM (water-level) data, which is quite easy for any solver to retrieve. What speaks better here is an ARRIVAL-TIME comparison, but the authors do not include one.
In Figure 16: what makes the so-called GeoFlood scale better than GeoClaw in relation to AMR? Why is CFL = 0.75 here, while it is 0.9 in the more critical case of the dam break past the building block? And why quantify scalability and efficiency only for the Malpasset test: what about the others?
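For context on the CFL question: explicit shallow-water solvers in the GeoClaw family limit the time step by the CFL condition, so a smaller CFL number directly shrinks the step and raises cost. A minimal sketch of that relationship, with illustrative cell states not taken from the paper:

```python
import math

def max_stable_dt(dx, depths, velocities, g=9.81, cfl=0.9):
    """Largest stable time step for an explicit shallow-water solver
    under the CFL condition: dt = cfl * dx / max(|u| + sqrt(g*h))."""
    wave_speed = max(abs(u) + math.sqrt(g * h)
                     for h, u in zip(depths, velocities))
    return cfl * dx / wave_speed

# Illustrative cell states (not from the paper): depths in m, velocities in m/s.
h = [0.4, 2.0, 8.0]
u = [0.0, 1.5, 3.0]
print(max_stable_dt(0.5, h, u, cfl=0.75))  # lowering CFL shrinks the step
print(max_stable_dt(0.5, h, u, cfl=0.9))
```

This is why the reviewer's point matters: choosing CFL = 0.75 instead of 0.9 costs roughly 20 % more time steps for the same simulated interval, so the choice should be justified, and applied consistently, when efficiency is being quantified.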
Recommendation: I cannot recommend publication of this paper in NHESS. The paper needs a substantial amount of work, beyond a major revision, and may be a better fit for a journal like GMD. Should the authors wish to revise and resubmit: it would be better to formulate the paper as an improvement to GeoClaw with a new version release. The introduction should provide full coverage of developments in AMR and Riemann-based finite volume SWE flooding solvers, so as to justify the motivation for this work: what value does it add to the current literature and models? The technical description should revolve around the improvements added to GeoClaw: how is it better than the existing version with hybrid CPU/GPU parallelisation? The results should include metric-based evaluations quantifying any improvement with respect to the existing GeoClaw version and with respect to the same underlying solver on the finest uniform grid (not HEC-RAS). In doing so, a better benchmark test case can be selected (if reusing the dam break with the obstacle, it should be the same one as in Neelz and Pender, 2013), including a proper flooding case such as the openly accessible Carlisle flood.
Citation: https://doi.org/10.5194/egusphere-2025-2173-RC1
Data sets
Datasets used in GeoFlood comparison to other models Brian Kyanjo https://doi.org/10.5281/zenodo.10897305
Model code and software
GeoFlood model Brian Kyanjo, Donna Calhoun https://doi.org/10.5281/zenodo.10929142
User Manual for running all codes used in the paper Brian Kyanjo https://drive.google.com/file/d/1O3QizHHNUrOUjw6Uw-G_2tLBPcZgT-PP/view?usp=sharing
Viewed
Since the preprint corresponding to this journal article was posted outside of Copernicus Publications, the preprint-related metrics are limited to HTML views.
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 748 | 0 | 1 | 749 | 0 | 0 |
This manuscript introduces and describes GeoFlood, a new open-source software package for solving the SWEs on quadtree meshes using ForestClaw. The new model is compared with GeoClaw and HEC-RAS. The model has value for modeling floods over large domains. I have the following comments:
- It would be easier for reviewers if line numbers were turned on.
- Second paragraph of the Introduction: the authors claim that shock-capturing finite volume methods have become the dominant class of schemes. I disagree. The most commonly used codes in practice, e.g., HEC-RAS 2D, do not use shock-capturing (Riemann solver) schemes, because of the limitation on time step size due to the explicit nature of such schemes.
- In all the test cases, when comparing with GeoClaw and HEC-RAS, do the models use similar, comparable meshes and resolutions? If the mesh resolutions are significantly different, then the computing-efficiency comparison is not on the same basis.
- Fig 8: It would be better to use two color maps to show the bathymetry and the water depth separately.
- Can meshes in GeoFlood be coarsened in addition to being refined? After the flood front has passed, the mesh could be coarsened to reduce computational cost.
- Fig 16: Parallel efficiency needs to be defined. What is it?
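The usual definition the reviewer is asking for is strong-scaling parallel efficiency: achieved speedup divided by the ideal speedup. A minimal sketch with hypothetical timings (not taken from the paper's Figure 16):

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Strong-scaling parallel efficiency E = T1 / (p * Tp):
    achieved speedup (T1 / Tp) divided by the ideal speedup p."""
    return (t_serial / t_parallel) / n_procs

# Hypothetical timings: 100 s on 1 process, 15 s on 8 processes.
print(parallel_efficiency(100.0, 15.0, 8))  # ~0.83, i.e. 83 % efficiency
```

E = 1 means ideal linear speedup; values below 1 reflect communication, load-imbalance, and AMR-regridding overheads, which is why the definition matters when comparing GeoFlood's scaling against GeoClaw's.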