Coupling of the Ice-sheet and Sea-level System Model (version 4.24) with hydrology model CUAS-MPI (version 0.1) using the preCICE coupling library
Abstract. Accurate earth system models must include interactions between atmosphere, ocean, and continental ice sheets. To build such models, numerical solvers that compute the evolution of the different components are coupled. There are frameworks and libraries for coupling that handle the complex tasks of coordinating the solver execution, communicating between processes, and mapping between different meshes. This allows solvers to be developed independently without compromises on numerical methods or technology. Code reuse is improved, both over large, monolithic software systems that reimplement each coupled model as well as over ad-hoc coupling scripts.
In this work, we use the preCICE coupling library to couple the Ice-sheet and Sea-level System Model (ISSM) with the subglacial hydrology model CUAS-MPI. An adapter for each model is required that passes the meshes and coupled variables between the model and preCICE. We describe the generic, reusable adapters we developed for both models and demonstrate their features experimentally. We also include computational performance results for the coupled system on a high-performance computing cluster. Coupling with preCICE has low computational overhead and does not negatively impact scaling. Therefore, the presented software facilitates studies of the subglacial hydrology systems of continental ice sheets as well as coupling ISSM or CUAS-MPI with other codes such as in global earth system models.
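As an illustration of the adapter concept, the sketch below shows the typical call pattern of a preCICE adapter, assuming the preCICE v3 C++ API; the participant, mesh, and data names are hypothetical, and this is not the actual adapter code from the paper.

```cpp
#include <precice/precice.hpp>
#include <vector>

// Minimal adapter skeleton (hypothetical names; preCICE v3 C++ API).
void runAdapter(int mpiRank, int mpiSize) {
  precice::Participant participant("CUAS", "precice-config.xml", mpiRank, mpiSize);

  // Register the solver's partition-local grid points with preCICE (2D: x0,y0,x1,y1,...).
  std::vector<double> coords = {/* filled from the solver's mesh */};
  std::vector<precice::VertexID> ids(coords.size() / 2);
  participant.setMeshVertices("Hydrology-Mesh", coords, ids);

  participant.initialize();
  std::vector<double> iceThickness(ids.size()), effectivePressure(ids.size());

  while (participant.isCouplingOngoing()) {
    double dt = participant.getMaxTimeStepSize(); // remaining size of the coupling window
    participant.readData("Hydrology-Mesh", "IceThickness", ids, dt, iceThickness);
    // ... advance the hydrology model by dt and fill effectivePressure ...
    participant.writeData("Hydrology-Mesh", "EffectivePressure", ids, effectivePressure);
    participant.advance(dt); // communication and mapping happen inside preCICE
  }
  participant.finalize();
}
```

Coordination, mapping between the two meshes, and communication are specified in precice-config.xml rather than in the solver code, which is what keeps such adapters generic.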
Status: open (until 23 Sep 2025)
RC1: 'Comment on egusphere-2025-3345', Moritz Hanke, 13 Aug 2025
Summary:
This manuscript presents a new coupled setup that implements a two-way data exchange between the Ice-sheet and Sea-level System Model (ISSM) and the subglacial hydrology model CUAS-MPI. The coupling interface between the two models is implemented using the generic coupling library preCICE. The "adapters" implemented as interfaces between the models and preCICE are described in detail. Simulations using this coupled configuration for a synthetic setup were performed to demonstrate the successful interaction between the two models. In addition, performance measurements were performed to showcase the low impact that preCICE imposes on the overall runtime and scalability of the coupled models.
The focus of the paper is on the technical aspects of the implementation of the preCICE adapters for the two models, the configuration of the coupled setup, and the usage of a generic coupler in Earth system modelling in general. These points are not completely new and are primarily of a technical nature. The manuscript could be improved by enhancing its scientific significance. For example, you could evaluate the results of the synthetic simulations in terms of how the two-way coupling improved the results compared to a stand-alone simulation that uses file-based input data. Additionally, a more detailed comparison of preCICE with specialised coupling libraries/frameworks (e.g. OASIS3-MCT, YAC, or ESMF) in terms of functionality and potential benefits could be included. Also, the pros and cons of using a generic coupler vs a specialised one and their respective use-cases could be discussed.
I recognise the potential interest of the Earth system modelling community in preCICE being used in this context. Therefore, I would consider this manuscript for publication in GMD, after a major revision.
General comments:
Abstract:
Please formulate the goals and achievements of the work presented by this paper more clearly.
Specific comments:
Line 1: "Accurate earth system models"
Is there such a thing as an accurate Earth system model?
Line 30-31: "on different spatial and temporal discretizations, which complicates the coupling"
Isn't handling these issues one of the main functions of a coupler?
Line 34: "time points where"
Maybe: "time intervals at which"
Line 35: Repeated start of a sentence
Line 35-36: "provides sophisticated numerical methods"
Which "numerical methods" do you mean (e.g. implicit vs explicit time stepping, remapping methods, ...)? Please be more specific.
Line 37-38: "easy to either add component like an existing ocean circulation model"
I do not think that this is the case. Reasons will be discussed further below.
Line 41 and 42: "Sect. 3"/"Sect. 4"
No abbreviation for "Section 2" was used.
Line 46: "overview of the coupling"
Maybe: "overview of the coupling setup"
Line 46: "existing codes"
Maybe: "three existing codes"
Line 50: Add long form of preCICE abbreviation
Line 51-52: "handles communication, data mapping, and coordination of the solvers"
Maybe: "handles communication, data mapping, and coordination between the solvers" to explicitly exclude communication like halo/ghost cell exchanges within a solver.
Line 53: "calls the preCICE library"
Maybe something like: "call initialisation routine of ..." would be more specific.
Line 53: "all options"
Maybe: "all preCICE configuration options"
Line 53-54: "This approach requires..."
In my opinion this is something for the conclusion.
Line 54-55: "the respective algorithms are selected"
Be more specific on the algorithms. Which tasks will they perform?
Paragraph 2.1:
Could you add more information about the adapters? Are they part of the solver, preCICE, or independent codes? Who usually implements them (preCICE or model developers)?
Line 62-63: Sentence "To establish the communication ..."
To me these are implementation details not relevant for this paper. How about:
"Communication channels between the processes of both solvers are established using a highly scalable algorithm (Totounferoush et al., 2021)."Line 70-71: Sentence : "However, if we wanted..."
This might be more suited for the discussion.
Paragraph 2.1.4:
Could be part of the introduction as part of "state-of-the-art".
Line 89-90: Sentence "One of the goals..."
Should be mentioned in the abstract and in my opinion would fit better in the introduction section.
Line 133-134: Sentence "This could be resolved..."
Is this relevant for the paper?
Line 149-152: Sentence "There are different...."
This general description of adapters could be included in Section 2.1.
Line 164: "linear cell interpolation"
Add a reference to clarify its meaning.
Line 179: "Partial interfaces"
In Earth system modelling, I would consider the term "masking" more appropriate.
Line 284: Section 3.1
Add minimal text to introduce this section.
Figure 6:
What additional information is provided by this figure when compared to Figure 1?
Section 3.1.2 Results:
If possible, this section could be improved by evaluating the results in terms of the quality of the simulation results: Does the coupled setup produce more accurate results than the uncoupled ones?
Line 382: Sentence "This also includes..."
The imbalance between solver initialisation times could be explicitly measured by introducing an additional synchronisation point.
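One way such a measurement could be instrumented, as a sketch assuming the preCICE v3 API (setupSolver is a hypothetical stand-in for the solver-specific initialisation; initialize() blocks until both participants reach it, so it can serve as the synchronisation point):

```cpp
#include <cstdio>
#include <mpi.h>
#include <precice/precice.hpp>

// Sketch: measure each participant's own setup time and its wait inside
// initialize(), which synchronises the two solvers.
void timedInitialize(precice::Participant &participant, void (*setupSolver)()) {
  double t0 = MPI_Wtime();
  setupSolver();                            // solver-specific initialisation
  double tSetup = MPI_Wtime() - t0;
  participant.initialize();                 // blocks until the partner is ready
  double tWait = MPI_Wtime() - t0 - tSetup; // approx. waiting for the other solver
  std::printf("setup: %.2f s, wait in initialize(): %.2f s\n", tSetup, tWait);
}
```

Note that initialize() also performs real work (connection setup, mapping computation, initial data exchange), so the measured wait is only an upper bound on the initialisation imbalance.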
Line 383-386: whole paragraph
Maybe simplify by: "3. Running the solver for one coupling window."
Line 390: "i.e., everything that happens before the participants or the first participant in a serial coupling"
This part is unclear to me and looks incomplete: "everything that happens before the participants [..] in a serial coupling" do what?
Line 389-393: whole paragraph
Initially it was not clear to me that in both cases (serial and parallel coupling) both models have their own dedicated processes, but that in the serial case the two models are allocated to the same resources, while in the parallel case each has its own dedicated computing resources. This could potentially be phrased more clearly.
In MPI, receive operations are often implemented as busy-loops that wait until the requested data is available. If the advance step contains an MPI receive operation, CUAS might generate a significant load on the shared resources in the serial case while waiting for the result from ISSM. Therefore, a potentially interesting experiment could be the comparison of a setup with serial coupling using shared or dedicated resources.
Line 394-397: whole paragraph
Does this represent a process which in climate science is often referred to as "spin-up"? Could this be avoided by initialising the models using restart files from a previous run?
Figure 9: Sentence "With few processes, ..."
I would assume that the idle time for the 768-process case is higher due to CUAS having better scaling properties than ISSM. With 192 processes you seem to have hit a sweet spot, where both models roughly require the same time for the simulation of one coupling window.
Line 401-403: whole paragraph
Do solver iteration counts differ between serial and parallel coupling? Could you quantify the differences between the execution time of a coupling window in both cases? Can these differences be explained exclusively by the imbalance between the two solvers, which can be measured by the "advance" timer?
Figure 12:
For 192 processes, ISSM has an execution time for a single coupling window of around 450 s and CUAS around 200 s. However, in Figure 9, both models seem to have an execution time of around 650 s each for the same number of processes. Did I misinterpret the figures, or how do you explain the inconsistency between the two figures?
Based on the measurements, I would assume that a parallel coupling setup with 512 processes for ISSM and 192 processes for CUAS should perform quite well. Do you agree? Why did you not use similar setups for the measurements of Figure 9?
I personally would start the performance analysis with this data and base the other measurements on its results.
Figure 13:
Does this again contradict the measurements from Figure 9?
Line 407-412: whole paragraph
Instead of defining a process count ratio, wouldn't it be easier to determine the number of processes for each component for a fixed coupling window execution time based on the measurements from Figure 12?
Section 3.2.2 Results:
The remapping and communication time between two coupling windows is usually negligible in climate setups. However, initialisation time can have a significant impact on the overall runtime and is a factor which is often of interest to users. Therefore, this should be analysed in more detail (see [1]).
Line 416-419: "CUAS and ISSM use [...] are easy to adapt."
These are properties which in my opinion are not unique to preCICE but shared with other coupling solutions. This could be clarified in the discussion.
Line 420-425: whole paragraph
Please include an analysis of preCICE's impact on initialisation time.
Line 425: Sentence "This imbalance..."
Can you explain why it is not possible in this case?
Line 440-441: Sentence "For example, the ice sheet code..."
I was told that this example does not make much sense, since hydrology is already included in Elmer/Ice [3].
Section Discussion:
If I am not mistaken, your adapter design should allow for a single-executable setup using serial coupling, in which ISSM is called by the CUAS-preCICE adapter within the CUASSolver. This could significantly improve performance of the serial setup. Did you consider this option? Maybe you can add a discussion about this.
I think in order to improve the scientific value of the paper you should extend the discussion on the pros and cons of using a generic coupling library compared to one specialised for Earth system modelling. In case you are interested, here are a few points:
* Usually, specialised couplers like OASIS3-MCT or YAC basically provide the coupling of 2D fields on a spherical surface. For global circulation models that simulate, for example, an atmosphere or an ocean, this constraint is adequate. For the coupling of these models, specialised couplers are probably the best choice since they also provide user interfaces and remapping options that are optimised for these cases. In addition, these models, to my knowledge, do not have the required software infrastructure to efficiently support the implicit coupling provided by preCICE.
* For the remapping of properties like surface freshwater flux, conservative remapping as described in [2] is used. This remapping scheme requires the usage of spherical geometry during the interpolation weight computation. This is currently not supported by preCICE, and adding support could become difficult. This could be overcome by supporting a remapping scheme based on user-provided weight files. These weight files could be produced in a preprocessing step by a specialised tool (e.g. CDO, ESMF, or YAC).
* I think the best use-cases for a general coupling library in Earth system modelling are those where the constraint to the coupling of 2D spherical fields is insufficient (e.g. glacier-ground interaction as described in this paper). It could even be feasible and interesting to set up a configuration where the coupling between ISSM and CUAS is implemented using a general coupler like preCICE and, in the same setup, additional coupling with an atmosphere and/or ocean model is implemented with a specialised coupler.
Line 455-456: Sentence "The use of preCICE..."
See comment above.
Line 465: "https://git.rwth-aachen.de/yannic.fischler/cuas-mpi"
The associated repository is not publicly available. The link to the version used for the paper is available, so removing the link to the current version would not be a problem.
[1]: https://raw.githubusercontent.com/IS-ENES3/IS-ENES-Website/main/pdf_documents/IS-ENES2_D10.3_FV.pdf
[2]: https://data-ww3.ifremer.fr/TRAINING/DOC/SCRIPusers.pdf
[3]: https://kannu.csc.fi/s/6CRGEdSZPEajnL6
Citation: https://doi.org/10.5194/egusphere-2025-3345-RC1
CC1: 'Reply on RC1, Request for Clarifications', Daniel Abele, 01 Sep 2025
Thank you for the detailed review, you make very good points.
At this point, I would only like to ask for clarification on a few of your comments.
MH: ""
Figure 9: Sentence "With few processes, ..."
I would assume that the idle time for the 768-process case is higher due to CUAS having better scaling properties than ISSM. With 192 processes you seem to have hit a sweet spot, where both models roughly require the same time for the simulation of one coupling window.
""I'm not sure whether this comment is asking for any specific revision. What you say is true. We used the 2:1 distribution precisely because it is optimal for one total CPU count. the rest of the analysis then looks at what happens if we use the same distribution for other CPU counts or whether a different distribution works better. Does this need to be stated more clearly in the text?
MH: ""
Section Discussion:
If I am not mistaken, your adapter design should allow for a single executable setup using serial coupling, in which ISSM is called by the CUAS-preCICE adapter within the CUASSolver. This could significantly improve performance of the serial setup. Did you consider this option? Maybe you can a discussion about this.
""I don't think I understand what you mean, could you please clarify how you think ISSM would be integrated and how mapping/communication is done (obviously only on a conceptual level, not the technical details)? preCICE does not have an API to map data directly, one adapter writes data, the other adapter reads, everything else is internal to preCICE. So integration into one executable doesn't change anything. And even if it was possible, the same operations (communication + mapping) would be performed, so I don't see significant gains there. Integration also brings with it software engineering challenges that are avoided through separate executables.
Citation: https://doi.org/10.5194/egusphere-2025-3345-CC1
RC2: 'Comment on egusphere-2025-3345', Basile de Fleurian, 04 Sep 2025
This study highlights the use of a generic coupling library (preCICE) to perform simulations coupling subglacial hydrology through CUAS-MPI and ice dynamics through ISSM. The study presents the different components that were developed to achieve the coupling, namely an adapter for the communication between ISSM and preCICE and a coupler implemented in CUAS-MPI responsible for the communication between this model and preCICE. A few simulations are showcased to confirm the proper behaviour of the coupled model and to evaluate the efficiency of the coupling.
The focus of the study is mostly on the technical aspects relative to the coupling and its efficiency rather than the analysis of the results of the coupled simulation. Following that objective, the authors presented an extensive description of the software needed to achieve the coupling without delving into the results of the simulations they performed. I can understand the more technical approach, but then I feel that the study is missing a proper comparison with existing couplers that could inform users on a choice of tool for future coupling. Rather, the focus is toward using a coupler or integrating a solver in a larger numerical model, which I feel is a different consideration.
I acknowledge the use of this study to introduce preCICE as a new coupling tool in the Earth system community. However, I feel that to give the coupler proper advertising, either some more substantial scientific results should be showcased or the authors should make a clearer point about the advantages and drawbacks of using this specific coupler.
Specific comments:
Title:
Depending on the focus of the final paper, I think that the title should be modified to better reflect the content of the study. In the current version, I would expect a study about the results of the coupling itself rather than a presentation of the coupler. I would also exclude version numbers from the title and have them later in the text to avoid cluttering the title.
Introduction:
I noted a few places with missing references; I try to list them in the comments below but might forget some, and I advise the authors to take specific care about that point.
- Line 19: The sentence starting "While the hydrological..." is not extremely clear to me. I would suggest rephrasing with something like: "While the hydrological system evolves on long time scales in the centre of ice sheets, at the margins, and particularly in Greenland, its evolution is faster, especially during the melt season."
- Line 20: Emphasis should be placed here on the fact that the interactions between subglacial hydrology and ice dynamics are complex; there are a few studies from observations or models that could be referenced here, e.g. (de Fleurian 2022, Ing 2024).
- Paragraph from Line 22 to 28: Here and later in the manuscript, I do not understand the special focus on the SHAKTI model; looking through the ISSM code, I can count 7 implemented subglacial hydrology models of different complexity. I might have a bias in this matter as I am the author of one of those models, but I think that this paragraph should be reworked. Also, in this paragraph, GlaDS is missing a reference.
- Line 29: Missing a reference to Fischler.
- Line 34: I suggest rephrasing with: "multiple properties, such as the time point for example...."
- Line 41: "next" should be removed.
Software:
- Figure 1: In the figure, it is not completely clear which fields are actually used by the given models. For example, I expect that the grounded ice melting rate is used as input for CUAS-MPI. That might be something that has been forgotten, or I may just have missed something in the design of the diagram.
- Section 2.1.3: It is stated here that the coupler has a fixed time window. Is that something that could evolve? I expect that for an efficient coupling, in the case of subglacial hydrology, you would like to adapt the time window's length depending on the season to catch the different variability of the subglacial drainage system.
- Line 105: The simulation codes (G4000, G250) are used here without references; that should be fixed.
- Line 110: The ordering of the figures is not respected, with Figure 3 appearing before Figure 2.
- Line 120: I think that "set" here is a better term than "aligned".
- Line 129: I am not sure that SHAKTI actually subdivides time steps (and I cannot find where it would do so in the code); the Double Continuum approach, on the other hand, does (in the src/c/core/hydrology_core.cpp file).
- Section 2.3: While I commend the authors for stressing the limitations of the coupler, I feel that the presentation they chose disrupts the flow of the paper and sometimes makes for quite awkward sections (2.3.6 for example), and that including the limitations as standard text would be easier to read.
- Line 180: Regarding the partial interfaces, I expect that this is also true for CUAS-MPI and that the hydrology model is also running on ocean and ice-free nodes?
- Line 185: I am missing the point of this remark on 3D variables.
- Line 202: I might be missing the point here, but I don't see how setting the initial condition is really a limitation. To me, the initialisation is part of the modelling workflow and the user is expected to set those values.
- Section 2.3.6: As stated, ISSM performs multiple iterations in a single coupling window. However, I wonder what strategy is used to pass the data to the coupler and hence the coupled model. Is the data passed only a snapshot at the end of the coupling window, a mean over the coupling window, or the full history (see the sketch after this list)? For ice dynamics, with a quite slow response time, that should not be a big issue, but it can be problematic when dealing with the faster evolution of the subglacial drainage system.
- Line 220: Perhaps the advance function behaviour should be described in more depth here.
- Line 230: A reference to the original CUAS model is missing here.
- Line 263: Is there a reason to use the ice thickness rather than the levelsets provided by ISSM to define the bndmask?
- Line 274: "provide" in place of "provided".
- Paragraph line 276 to 279: Do you mean here that the forcing is split into surface input, friction-generated water, water produced by the geothermal heat flux (GHF), and potentially other sources, and that the coupling only provides friction? This sentence is not very clear and should be rephrased.
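Regarding the Section 2.3.6 item above, a context sketch of what preCICE itself offers for sub-window data (preCICE v3; the mesh and data names are hypothetical, and whether the adapters in the paper use this mechanism is exactly the open question): a subcycling participant can sample coupling data at any point inside the current coupling window, and preCICE interpolates in time between the samples exchanged at the window boundaries.

```cpp
#include <precice/precice.hpp>
#include <algorithm>
#include <vector>

// Sketch of a subcycling participant reading time-interpolated coupling data.
void subcycle(precice::Participant &p, const std::vector<precice::VertexID> &ids,
              double solverDt) {
  std::vector<double> N(ids.size()); // e.g. effective pressure from the other solver
  while (p.isCouplingOngoing()) {
    double dt = std::min(solverDt, p.getMaxTimeStepSize());
    // Sample the time interpolant (waveform) at the end of this sub-step,
    // not only at the end of the coupling window.
    p.readData("Ice-Mesh", "EffectivePressure", ids, dt, N);
    // ... advance the model by dt using N ...
    p.advance(dt);
  }
}
```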
Experiments:
- Line 281: "a model of subglacial hydrology" should be replaced by "CUAS".
- Line 311: It should be added here that the mapping scheme is used to convert between grids.
Results:
- I think that the choice of results here is not the most informative. The difference maps do not really give a good idea of what the fields actually look like, and that is an issue for the comprehension of the impact of the coupled model. I get that the scope is more to assess if the coupler is actually behaving properly, but we would need a better understanding of the results for that. I would like to see a steady-state field for the different variables, at the end of the spin-up or at the end of the coupled simulation, to confirm that the coupled model is actually behaving in a proper way. It would also be interesting to see a time evolution of bulk variables to see if there are any transient changes. Connected to that last remark, I wonder if the dynamics of the coupled system would be the same if the coupling window is changed?
- Line 322: Effective pressure is in panel 7b, not 7a.
- Line 325: I do not agree with the analysis of the results in the cold-based region. To my eyes, the interior part of the cold-based region shows a decrease in effective pressure while the outer parts show an increase. Without the original field, it is hard to figure out what the effective pressure field of the coupled simulation looks like and whether that result is reasonable. I also wonder why there is a quite large decrease in effective pressure and ice thickness at the grounding line of the domain. Is that due to a retreat of the grounding line or an error associated with the mapping of the grids?
- Line 328: I don't see a large interest in the anom. simulation. In my opinion, it would be more interesting to give more details on the reference simulation and its comparison with the uncoupled run. If the authors decide to keep this simulation in the end, I think that presenting it without a plot of the transient evolution is not very useful.
- Line 330: The Budd friction law given here is the correct one; the one on line 291 should be changed.
- 3.2 Performance: I expect that the new set-up is needed because the synthetic one runs too fast to be relevant for spotting performance issues? If that is the case, it should probably be stated here.
- Line 346: I am not sure that not restricting CUAS to warm-based ice is the right way to go. If I am not mistaken, the effective pressure can be computed everywhere even if there is no water, and then it just becomes the ice overburden pressure (see the note after this list). In this case, wouldn't it be better to run CUAS only on warm nodes and then set the effective pressure equal to the ice overburden pressure on the frozen nodes? My main concern with that is that frozen regions could actually have an impact on the simulated subglacial drainage system, as they act as dams preventing the flow of water, which could change the water pressure distribution at the base of the ice.
- Paragraph Line 361 to 363: This paragraph is not very clear and should be rephrased.
- Line 368: I don't see why serial and parallel coupling would reasonably give different results if everything else is equal. The only difference I can see is that the coupling scheme has some idling time for one of the solvers, but don't they get the same result from the other participant anyway? I feel that I am missing something here that should probably be explained better.
- Figure 9-10: It would be nice to have the simulations with 384 or 792 CPUs presented in both those figures to have a better idea of the difference between the two coupling schemes.
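For context on the Line 346 item above, the standard glaciological relation behind it (not specific to CUAS-MPI): the effective pressure $N$ is the difference between the ice overburden pressure and the subglacial water pressure,

$$ N = p_\mathrm{i} - p_\mathrm{w}, \qquad p_\mathrm{i} = \rho_\mathrm{i} g H, $$

with $\rho_\mathrm{i}$ the ice density, $g$ the gravitational acceleration, and $H$ the ice thickness; where there is no water, $p_\mathrm{w} = 0$ and $N$ reduces to the overburden pressure $\rho_\mathrm{i} g H$.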
Discussion and Conclusion:
I think that a better point could be made to stress the pros and cons of this coupler against Earth-system-specific couplers. The conclusion should also do a better job of synthesising the paper's results rather than presenting future work. I also think that the reference to Keyes should probably appear in the introduction of the paper rather than here.
- Line 415: I am probably biased here, but I think that citing DOCO (de Fleurian 2022) here would be more relevant, as this model has actually been coupled to an ice dynamics model, and I am not aware of an application where SHAKTI was coupled.
Citation: https://doi.org/10.5194/egusphere-2025-3345-RC2
Data sets
Coupling ISSM and CUAS-MPI: example cases Daniel Abele, Thomas Kleiner, Yannic Fischler, Angelika Humbert https://doi.org/10.5281/zenodo.15849146
Model code and software
ISSM-preCICE adapter Daniel Abele, Angelika Humbert https://doi.org/10.5281/zenodo.15785544
CUAS-MPI with adapter for the preCICE coupling library Yannic Fischler, Thomas Kleiner, Daniel Abele, Angelika Humbert https://doi.org/10.5281/zenodo.15782324
Viewed
- HTML: 679
- PDF: 37
- XML: 10
- Total: 726
- BibTeX: 10
- EndNote: 17