Coupling of the Ice-sheet and Sea-level System Model (version 4.24) with hydrology model CUAS-MPI (version 0.1) using the preCICE coupling library
Abstract. Accurate earth system models must include interactions between the atmosphere, the ocean, and continental ice sheets. To build such models, numerical solvers that compute the evolution of the different components are coupled. There are frameworks and libraries for coupling that handle the complex tasks of coordinating the solver execution, communicating between processes, and mapping between different meshes. This allows solvers to be developed independently, without compromises on numerical methods or technology. Code reuse improves both over large, monolithic software systems that reimplement each coupled model and over ad hoc coupling scripts.
In this work, we use the preCICE coupling library to couple the Ice-sheet and Sea-level System Model (ISSM) with the subglacial hydrology model CUAS-MPI. Each model requires an adapter that passes the meshes and coupled variables between the model and preCICE. We describe the generic, reusable adapters we developed for both models and demonstrate their features experimentally. We also include computational performance results for the coupled system on a high-performance computing cluster. Coupling with preCICE has low computational overhead and does not negatively impact scaling. The presented software therefore facilitates studies of the subglacial hydrology systems of continental ice sheets, as well as the coupling of ISSM or CUAS-MPI with other codes, for example in global earth system models.
Status: final response (author comments only)
RC1: 'Comment on egusphere-2025-3345', Moritz Hanke, 13 Aug 2025
CC1: 'Reply on RC1, Request for Clarifications', Daniel Abele, 01 Sep 2025
Thank you for the detailed review; you make very good points.
At this point, I would only like to ask for clarification on a few of your comments.
MH: "Figure 9: Sentence 'With few processes, ...'
I would assume that the idle time for the 768-process case is higher due to CUAS having better scaling properties than ISSM. With 192 processes you seem to have hit a sweet spot, where both models roughly require the same time for the simulation of one coupling window."

I'm not sure whether this comment is asking for any specific revision. What you say is true. We used the 2:1 distribution precisely because it is optimal for one total CPU count. The rest of the analysis then looks at what happens if we use the same distribution for other CPU counts, or whether a different distribution works better. Does this need to be stated more clearly in the text?
MH: "Section Discussion:
If I am not mistaken, your adapter design should allow for a single executable setup using serial coupling, in which ISSM is called by the CUAS-preCICE adapter within the CUASSolver. This could significantly improve performance of the serial setup. Did you consider this option? Maybe you could add a discussion about this."

I don't think I understand what you mean. Could you please clarify how you think ISSM would be integrated and how mapping/communication is done (obviously only on a conceptual level, not the technical details)? preCICE does not have an API to map data directly: one adapter writes data, the other adapter reads, and everything else is internal to preCICE. So integration into one executable doesn't change anything. And even if it were possible, the same operations (communication + mapping) would be performed, so I don't see significant gains there. Integration also brings with it software engineering challenges that are avoided through separate executables.
Citation: https://doi.org/10.5194/egusphere-2025-3345-CC1
AC1: 'Reply to review by M. Hanke (RC1)', Daniel Abele, 14 Nov 2025
We want to thank Moritz Hanke for his detailed review and suggestions. We first summarize the planned revisions to the paper. This summary is identical for both reviewers since we feel that they agree on the main issues.
We will
- extend the discussion of different (types of) coupling libraries, in particular the pros and cons of using a generic library like preCICE over earth system specific libraries as suggested by the reviewers. This allows readers to better judge the advantages and disadvantages of our solution.
- change the experiment of synthetic setup in Sect. 3.1. to better show the effect of coupling. We will compare coupled and uncoupled runs with focus on the transient behavior to see that feedback between the models is correctly propagated.
- add more detailed investigation of the initialization phase to the performance analysis in Sect. 3.2. We will also describe our process and choices in these experiments more clearly.
- improve the focus of the manuscript on the technical implementation by more clearly stating the goals and results in the introduction, abstract, title and conclusion.
In the following, we answer the specific comments that have been made. In bold font, we repeat verbatim the comment from the reviewer. Continuing in normal font, we give our reply and planned revisions. In some cases, the order of reviewer comments has been changed to reply to multiple related comments at the same time.
Specific comments:
Abstract:
Please formulate the goals and achievements of the work presented by this paper more clearly.

As stated above, we will state the goals and results of the work more clearly in the abstract (as well as in the introduction, title, and conclusion).
Line 1: "Accurate earth system models"
Is there such a thing as an accurate Earth system model?

You are right that this wording is too strong. We will rephrase this in the revised version.
Line 30-31: "on different spatial and temporal discretizations, which complicates the coupling"
Isn't handling these issues one of the main functions of a coupler?

Your assertion is correct. The cited statement is meant to justify the use of a coupling library: without differing discretizations, a library would be needed less. This section will be rephrased in the revised manuscript.
Line 34: "time points where"
Maybe: "time intervals at which"

You are correct: the interval is what is configured, while the time points are implicitly derived. This will be rephrased in the revised manuscript.
Line 35: Repeated start of a sentence
Changed.
Line 35-36: "provides sophisticated numerical methods"
Which "numerical methods" do you mean (e.g. implicit vs explicit time stepping, remapping methods, ...)? Please be more specific.

We refer here to the methods that you mentioned, as well as additional ones like time interpolation. We will explicitly state this here in the revised manuscript. The methods are described in more detail in the section on preCICE.
Line 37-38: "easy to either add component like an existing ocean circulation model"
I do not think that this is the case. Reasons will be discussed further below.

You are correct that "easy" is not the right term. Coupling of a global circulation model would of course run into problems with incompatible coordinate systems, as you mention elsewhere. Local ocean models would not have that problem. Other complexities remain that cannot be handled generically. For example, input variables for one model must be derived from output variables of the other model if they don't fit exactly. We describe how we do this in the section on the CUAS adapter. We argue that the adapters presented in our submission reduce the developmental effort for many use cases, but we do not wish to imply that it's trivial.
We will rephrase this statement here to make our meaning clear. In addition, above points will be added to the discussion section.
Line 41 and 42: "Sect. 3"/"Sect. 4"
No abbreviation for "Section 2" was used.

Changed.
Line 46: "overview of the coupling"
Maybe: "overview of the coupling setup"

Changed.
Line 46: "existing codes"
Maybe: "existing three codes"

Changed.
Line 50: Add long form of preCICE abbreviation
Changed. (adding “Precise Code Interaction Coupling Environment”, https://precice.org/docs.html)
Line 51-52: "handles communication, data mapping, and coordination of the solvers"
Maybe: "handles communication, data mapping, and coordination between the solvers" to explicitly exclude communication like halo/ghost cell exchanges within a solver.

By coordination, we mean here the temporal order of computation (coupling intervals, implicit vs. explicit, serial vs. parallel schemes). Your suggestion is still good, but we think "between solvers" fits better earlier in the list, after "communication". We will rephrase this sentence to both include your suggestion and clarify the meaning.
Line 53: "calls the preCICE library"
Maybe something like: "call initialisation routine of ..." would be more specific.

We refer here to all calls made by the solver to preCICE, not just initialization. It is always the solver that calls preCICE. The confusion probably comes from the start of the quoted sentence, which reads "To start a coupled simulation...". We will rephrase this.
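To illustrate what "calls" covers here, a minimal sketch of a solver's call sequence into preCICE (v3 C++ API); the participant, mesh, and data names are hypothetical and not taken from the adapters described in the paper:

```cpp
#include <precice/precice.hpp>
#include <vector>

int main() {
  // The solver constructs the participant; all further calls also go
  // from the solver (or its adapter) into preCICE, never the other way.
  precice::Participant participant("CUAS", "precice-config.xml",
                                   /*rank=*/0, /*size=*/1);

  // Register the coupling mesh (two 2D vertices as a stand-in).
  std::vector<double> coords = {0.0, 0.0, 1.0, 0.0};
  std::vector<precice::VertexID> ids(2);
  participant.setMeshVertices("HydrologyMesh", coords, ids);

  participant.initialize(); // establishes communication, computes mappings

  std::vector<double> melt(2), pressure(2);
  while (participant.isCouplingOngoing()) {
    double dt = participant.getMaxTimeStepSize();
    participant.readData("HydrologyMesh", "BasalMelt", ids, dt, melt);
    // ... advance the hydrology solver by dt using `melt` as forcing ...
    participant.writeData("HydrologyMesh", "EffectivePressure", ids, pressure);
    participant.advance(dt); // exchange + mapping; may wait for the partner
  }
  participant.finalize();
}
```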
Line 53: "all options"
Maybe: "all preCICE configuration options"

Changed.
Line 53-54: "This approach requires..."
In my opinion this is something for the conclusion.

This is intended as a justification for the design of preCICE in general. We will rephrase this section to make clear that this is a goal, and discuss later whether the goal was achieved in our specific application.
Line 54-55: "the respective algorithms are selected"
Be more specific on the algorithms. Which tasks will they perform?

Changed to add coupling scheme and data mapping.
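As an illustration of where these algorithms are selected, a skeletal preCICE configuration with a serial-explicit coupling scheme and nearest-neighbor mappings; all names and the window size are made up for this sketch and do not reproduce the paper's actual configuration:

```xml
<precice-configuration>
  <data:scalar name="BasalMelt" />
  <data:scalar name="EffectivePressure" />

  <mesh name="IceMesh" dimensions="2">
    <use-data name="BasalMelt" /> <use-data name="EffectivePressure" />
  </mesh>
  <mesh name="HydrologyMesh" dimensions="2">
    <use-data name="BasalMelt" /> <use-data name="EffectivePressure" />
  </mesh>

  <participant name="ISSM">
    <provide-mesh name="IceMesh" />
    <receive-mesh name="HydrologyMesh" from="CUAS" />
    <write-data name="BasalMelt" mesh="IceMesh" />
    <read-data name="EffectivePressure" mesh="IceMesh" />
    <!-- the data mapping algorithm is chosen here -->
    <mapping:nearest-neighbor direction="read" from="HydrologyMesh"
                              to="IceMesh" constraint="consistent" />
  </participant>

  <participant name="CUAS">
    <provide-mesh name="HydrologyMesh" />
    <receive-mesh name="IceMesh" from="ISSM" />
    <write-data name="EffectivePressure" mesh="HydrologyMesh" />
    <read-data name="BasalMelt" mesh="HydrologyMesh" />
    <mapping:nearest-neighbor direction="read" from="IceMesh"
                              to="HydrologyMesh" constraint="consistent" />
  </participant>

  <!-- inter-code communication mode: sockets or MPI -->
  <m2n:sockets acceptor="ISSM" connector="CUAS" />

  <!-- the coupling scheme (serial vs. parallel, explicit vs. implicit) -->
  <coupling-scheme:serial-explicit>
    <participants first="ISSM" second="CUAS" />
    <time-window-size value="2592000" /> <!-- e.g. 30 days, illustrative -->
    <exchange data="BasalMelt" mesh="IceMesh" from="ISSM" to="CUAS" />
    <exchange data="EffectivePressure" mesh="HydrologyMesh"
              from="CUAS" to="ISSM" />
  </coupling-scheme:serial-explicit>
</precice-configuration>
```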
Paragraph 2.1:
Could you add more information about the adapters? Are they part of the solver, preCICE, or independent codes? Who usually implements them (preCICE or model developers)?

We have a short discussion of this later in the paper when describing the ISSM adapters. You are right that this is missing a few details and is better included here. We will add these points here and refer back to them later in the text.
Line: 62-63: Sentence "To establish the communication ..."
To me these are implementation details not relevant for this paper. How about:
"Communication channels between the processes of both solvers are established using a highly scalable algorithm (Totounferoush et al., 2021)."

We agree that the focus here should be on the advantage for the user instead of implementation details and will rephrase this accordingly.
Line 70-71: Sentence : "However, if we wanted..."
This might be more suited for the discussion.

You are correct that the statement does not fit here in its current form. We will hopefully be able to reference a new publication in the revised manuscript with a first evaluation of using preCICE for data mapping of spherical grids.
Paragraph 2.1.4
Could be part of the introduction as part of "state-of-the-art".

Line 89-90: Sentence "One of the goals..."
Should be mentioned in the abstract and in my opinion would fit better in the introduction section.

This paragraph and the quoted line discuss alternatives to preCICE. We agree this is more relevant in the introduction (and in the abstract for the stated goal). In addition, we will refer back to it in the discussion.
Line 133-134: Sentence "This could be resolved..."
Is this relevant for the paper?

This sentence is intended as further justification for the development of the adapters we present here. An alternative to the new development would be integrating CUAS into ISSM. Flexible decomposition is easier (but not impossible) with independent solvers coupled with preCICE (or another coupling library). We use this in some of the performance experiments in this paper, where CUAS is allotted fewer CPUs than ISSM in a parallel coupling scheme.
You are right that this is not clear from the text, and we will rephrase it accordingly. In addition, we will add this justification to the introduction where it is not explicitly mentioned yet.
L149-152: Sentence "There are different...."
This general description of adapters could be included in section 2.1.

As mentioned above, this will be moved to the suggested place and referred back to here.
L164: "linear cell interpolation"
Add reference to clarify its meaning.

Changed by adding a short description and a reference here.
L179: "Partial interfaces"
In Earth system modelling, I would consider the term "masking" more appropriate.

You are right, we will change this to use the established term.
L284: Section 3.1
Add minimal text to introduce this section.

Changed.
Figure 6:
What additional information is provided by this figure when compared to Figure 1?

It is a presentation that is more familiar to users of preCICE. It is auto-generated by preCICE tools and adds some details about the precise data mapping configuration, in particular whether mapping is performed on read or on write. We will expand the description of the figure to include this information.
Section 3.1.2 Results:
If possible, this section could be improved by evaluating the results in terms of quality of simulation results: Does the coupled setup produce more accurate results than the uncoupled ones?

There are no known or analytic results for the synthetic setup, so it's not possible to quantify absolute accuracy. A real-world setup could allow comparison to measured data, but such a setup (beyond what is necessary for the performance experiments) is significant modelling effort and outside the scope of this technical paper.
But as you suggested, we will add comparison between a coupled and uncoupled synthetic setup in the revised manuscript. This will at least show the effect of the missing feedback in the uncoupled setup.
Line 382: Sentence "This also includes..."
The imbalance between solver initialisation times could be explicitly measured by introducing an additional synchronisation point.

This data is available from preCICE profiling but not explicitly discussed in the manuscript. It is possible to estimate wait times from Figures 9 and 10. We will add explicit values in the revised manuscript.
Line 383-386: whole paragraph
Maybe simplify by: "3. Running the solver for one coupling window."

You are right that the paragraph should be simplified. However, we think it is important to mention that subcycling in time does not have significant computational overhead. The paragraph will be rephrased accordingly, using your suggestion as an opening summary.
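A sketch of the subcycling pattern referred to here (preCICE v3 API; the solver step itself is a placeholder):

```cpp
#include <precice/precice.hpp>
#include <algorithm>

// The solver may take internal time steps smaller than the coupling window;
// preCICE caps each step at the time remaining in the current window and
// only exchanges data once the window is complete.
void runCoupledLoop(precice::Participant &participant, double solverDt) {
  while (participant.isCouplingOngoing()) {
    double dt = std::min(solverDt, participant.getMaxTimeStepSize());
    // ... advance the solver state by dt (placeholder) ...
    participant.advance(dt);
  }
}
```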
Line 390: "i.e., everything that happens before the participants or the first participant in a serial coupling"
This part is unclear to me and looks incomplete. "everything that happens before the participants [..] in a serial coupling" do what?

You are right, the sentence is badly worded and also missing specificity. We will rephrase it, adding the specific parts included in initialization: loading data, building solver and coupler data structures, precomputing data mappings, etc.
Line 389-393: whole paragraph
Initially it was not clear to me that in both cases (serial and parallel coupling) both models have their own dedicated processes, but that in the serial case the two models are allocated to the same resources, while in the parallel case each has its own dedicated computing resources. This could potentially be phrased more clearly.

We agree, this will be stated more clearly in the revised manuscript.
[Line 389-393: whole paragraph, cont’d]
In MPI, receive operations are often implemented as busy-loops that wait until the requested data is available. If the advance step contains an MPI receive operation, CUAS might generate a significant load on the shared resources in the serial case while waiting for the result from ISSM. Therefore, a potentially interesting experiment could be the comparison of a setup with serial coupling using shared or dedicated resources.

We ran experiments and have not found this to be a problem, at least on our system. Only initialization is affected, since both solvers initialize at the same time on contested hardware resources. This can already be seen by comparing serial and parallel coupling. Note that preCICE offers two different modes of communication, MPI and sockets. We have used socket mode in our experiments.
We will include these points and the results of our experiment in the revised manuscript.
Line 394-397: whole paragraph
Does this represent a process which in climate science is often referred to as "spin-up"? Could this be avoided by initialising the models using restart files from a previous run?

In our current process, both models first perform spin-up uncoupled. This produces setups that are not entirely compatible. A period of relaxation at the beginning of coupling is probably unavoidable. This could be called coupled spin-up.
Due to limited investigation of the model results of the Greenland setup in this paper, we cannot confidently discuss this effect. Depending on which model parameters change between runs, it would certainly be possible to start from the relaxed state that was precomputed. For these reasons we mostly limit our discussion to later coupling windows when execution times have stabilized.
We will add this point to the revised manuscript.
Figure 9: Sentence "With few processes, ..."
I would assume that the idle time for the 768-process case is higher due to CUAS having better scaling properties than ISSM. With 192 processes you seem to have hit a sweet spot, where both models roughly require the same time for the simulation of one coupling window.

This is true, but it is not accidental. We are using the distribution of 2:1 for CPUs assigned to ISSM and CUAS precisely because first experimental results with low CPU counts (one or two cluster nodes) suggested this as a sweet spot. We scaled up from there to see if the factor should be changed for larger numbers of CPUs. The results show increasing wait times, but adjusting the CPU distribution to minimize wait times does not significantly improve the runtime anyway.
We will state more clearly in the manuscript why we have chosen the distributions of 2:1 and 2.7:1.
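For context, the idealized balance argument behind such ratios (our formulation; it assumes the per-window cost w of each solver scales like 1/n, which real scaling only approximates):

```latex
T_{\mathrm{ISSM}}(n_I) \approx \frac{w_I}{n_I}, \qquad
T_{\mathrm{CUAS}}(n_C) \approx \frac{w_C}{n_C}, \qquad n_I + n_C = N.
% Wait times vanish when the window times match:
\frac{w_I}{n_I} = \frac{w_C}{n_C}
\quad\Longrightarrow\quad
\frac{n_I}{n_C} = \frac{w_I}{w_C}.
```

Under this idealization, the optimal split equals the ratio of single-solver window costs, which is where the 2:1 (low core counts) and 2.7:1 (at scale) values come from.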
Line 401-403: whole paragraph
Do solver iteration counts differ between serial and parallel coupling? Could you quantify the differences between the execution time of a coupling window in both cases? Can these differences exclusively be explained by the imbalance between the two solvers, which can be measured by the "advance" timer?

Iteration counts of the solvers (nonlinear solvers in the stress balance/thermal core in ISSM, linear solver in CUAS) do not differ significantly, and execution times for coupling windows are basically the same. We will add data to show this in the revised manuscript.
Figure 12:
For 192 processes ISSM has an execution time for a single coupling window of around 450 s and CUAS 200 s. However, in Figure 9 both models seem to have, for the same number of processes, an execution time of around 650 s each. Did I misinterpret the figures, or how do you explain the inconsistency between the two figures?

Our figures are not clearly labelled here. In Fig. 9, 10 and 11, the CPU counts refer to the combined count for both ISSM and CUAS. In Fig. 12 and 13, the CPU counts refer to the count for a single model. In Fig. 9, ISSM is assigned 128 CPUs and CUAS is assigned 64 CPUs. Fig. 12 does not include a measurement for CUAS with 64 CPUs, but the measurement for ISSM with 128 CPUs is consistent with the execution times in Fig. 9. All figures are based on the same measurements.
We will improve the figure labels in the revised manuscript and add the measurement for CUAS with 64 CPUs.
[Figure 12, cont’d]
Based on the measurements, I would assume that a parallel coupling setup with 512 processes for ISSM and 192 processes for CUAS should perform quite well. Do you agree? Why did you not use similar setups for the measurements of Figure 9?
I personally would start the performance analysis with this data and base the other measurements on its results.

We did basically what you suggested. First experiments with 192 total CPUs suggested a distribution of 2:1 to be optimal. For higher numbers of CPUs, the wait times added overhead, so we looked at the execution times of each solver to calculate a distribution that would minimize wait times. You will find that your suggested distribution of 512:192 is very close to the value of 2.7:1 that we used. While this almost eliminated wait times (see Fig. 13), total runtime was only marginally improved (Fig. 11).
As mentioned above, we will state more clearly in the manuscript why we have chosen the distributions of 2:1 and 2.7:1.
Figure 13:
Does this again contradict the measurements from Figure 9?

As explained above, the labels in the figures will be improved to display the number of CPUs in a clear and consistent way.
Line 407-412: whole paragraph
Instead of defining a process count ratio, wouldn't it be easier to determine the number of processes for each component for a fixed coupling window execution time based on the measurements from Figure 12?

As explained above, this is basically what we did.
Section 3.2.2 Results
The remapping and communication time between two coupling windows usually is negligible in climate setups. However, initialisation time can have a significant impact on the overall runtime, and this is a factor which is often of interest to users. Therefore, this should be analysed in more detail. (See [1])

Line 420-425: whole paragraph
Please include an analysis of preCICE's impact on initialisation time.

A large part of initialization is IO, which exhibits far less consistent measured execution times for different clusters. We believe the value of a detailed analysis is limited. But we agree that it should not be ignored, and we will add measurements of initialization times to the manuscript, in particular the time required to initialize preCICE as you request.
Using exclusive instead of shared nodes even for serial coupling has a large impact on initialization times. As mentioned above, we will add this comparison to the manuscript.
Line 416-419. "CUAS and ISSM use [...] are easy to adapt."
These are properties which in my opinion are not unique to preCICE but shared with other coupling solutions. This could be clarified in the discussion.

You are correct, and we will add this to our discussion.
Line 425: Sentence "This imbalance..."
Can you explain why it is not possible in this case?

We tried to fix the imbalance by adjusting the distribution of CPUs but did not have positive results, as explained above. We will rephrase the statement to make clear that it is based on our experiments.
Line 440-441: Sentence "For example, the ice sheet code..."
I was told that this example does not make much sense, since hydrology is already included in Elmer/Ice [3].

Most ice sheet models include at least some hydrology model. We mention in the introduction that both ISSM and Elmer/Ice already include multiple different hydrology models. Models have advantages and disadvantages, and it is worthwhile to compare them in the same context with the same ice sheet model. Coupling adapters make this easier, since not every hydrology model has to be integrated into every ice sheet model. All the reasons for coupling CUAS with ISSM (allowing different grid resolutions, independent development, ...) apply to Elmer/Ice as well. We did not intend to imply that hydrology would be new to Elmer/Ice. In the revisions, we will put more emphasis on the value of diverse hydrology models.
Section Discussion:
If I am not mistaken, your adapter design should allow for a single executable setup using serial coupling, in which ISSM is called by the CUAS-preCICE adapter within the CUASSolver. This could significantly improve performance of the serial setup. Did you consider this option? Maybe you could add a discussion about this.

This is worth thinking about. However, preCICE is not intended to be used in this way, nor does it provide the necessary API. One adapter writes data, the other adapter reads it. Since the computation performed by each model is still the same and the overhead for data mapping is small, we would not expect performance improvements that justify the increased development effort. It would also be a less flexible/extensible solution, e.g., we could not easily add more participants.
[Section Discussion, cont’d]
I think in order to improve the scientific value of the paper you should extend the discussion on the pros and cons of using a generic coupling library compared to one specialised for Earth system modelling. In case you are interested, here are a few points:
* Usually specialised couplers like OASIS3-MCT or YAC basically provide the coupling of 2D fields on a spherical surface. For global circulation models that simulate for example an atmosphere or an ocean, this constraint is adequate. For the coupling of these models specialised couplers are probably the best choice, since they also provide user interfaces and remapping options that are optimised for these cases. In addition, to my knowledge these models do not have the software infrastructure required to efficiently support the implicit coupling provided by preCICE.
* For the remapping of properties like surface freshwater flux, conservative remapping as described in [2] is used. This remapping scheme requires the usage of spherical geometry during the interpolation weight computation. This is currently not supported by preCICE, and its support could become difficult. This could be overcome by supporting a remapping scheme based on user-provided weight files. These weight files could be produced in a preprocessing step by a specialised tool (e.g. CDO, ESMF, or YAC).
* I think the best use-case for a general coupling library in Earth system modelling are cases where the constraint to the coupling of 2D spherical fields is insufficient (e.g. glacier-ground interaction as described in this paper). It could even be feasible and interesting to set up a configuration where coupling between ISSM and CUAS is implemented using a general coupler like preCICE, and in the same setup additional coupling with an atmosphere and/or ocean model is implemented with a specialised coupler.

Line 455-456: Sentence "The use of preCICE..."
See comment above.

As stated above, we will expand the discussion to include a comparison of (types of) coupling libraries. Your points are highly appreciated and merit discussion. We will reference a new publication with first results towards using preCICE to map grids in spherical coordinate systems.
Line 465: "https://git.rwth-aachen.de/yannic.fischler/cuas-mpi"
The associated repository is not publicly available. The link to the version used for the paper is available, so removing the link to the current version would not be a problem.

We mixed up the URLs of our private and public repositories. We will replace it with a link to the public repository, https://github.com/tudasc/CUAS-MPI.
References [by Moritz Hanke, included for completeness]:
[1]: https://raw.githubusercontent.com/IS-ENES3/IS-ENES-Website/main/pdf_documents/IS-ENES2_D10.3_FV.pdf
[2]: https://data-ww3.ifremer.fr/TRAINING/DOC/SCRIPusers.pdf
[3]: https://kannu.csc.fi/s/6CRGEdSZPEajnL6
Citation: https://doi.org/10.5194/egusphere-2025-3345-AC1
RC2: 'Comment on egusphere-2025-3345', Basile de Fleurian, 04 Sep 2025
This study highlights the use of a generic coupling library (preCICE) to perform simulations coupling subglacial hydrology through CUAS-MPI and ice dynamics through ISSM. The study presents the different components that were developed to achieve the coupling, namely an adapter for the communication between ISSM and preCICE and a coupler implemented in CUAS-MPI responsible for the communication between this model and preCICE. A few simulations are showcased to confirm the proper behaviour of the coupled model and to evaluate the efficiency of the coupling.
The focus of the study is mostly on the technical aspects relative to the coupling and its efficiency rather than the analysis of the results of the coupled simulation. Following that objective, the authors presented an extensive description of the software needed to achieve the coupling without delving into the results of the simulations they performed. I can understand the more technical approach, but then I feel that the study is missing a proper comparison with existing couplers that could inform users on a choice of tool for future coupling. Rather, the focus is toward using a coupler or integrating a solver in a larger numerical model, which I feel is a different consideration.
I acknowledge the use of this study to introduce preCICE as a new coupling tool in the earth system community. However, I feel that to properly advertise the coupler, either some more substantial scientific results should be showcased or the authors should make a clearer point about the advantages and drawbacks of using this specific coupler.
Specific comments:
Title:
Depending on the focus of the final paper I think that the title should be modified to be a better reflection of the content of the study. In the current version I would expect a study about the results of the coupling itself rather than a presentation of the coupler. I would also exclude version numbers from the title and have them later in the text to avoid cluttering the title.
Introduction:
I noted a few places with missing references. I try to list them in the comments below but might forget some, and advise the authors to take specific care about that point.
- Line 19: The sentence starting "While the hydrological..." is not extremely clear to me. I would suggest rephrasing with something like: "While the hydrological system evolves on long time scales in the centre of ice sheets, at the margins and particularly in Greenland its evolution is faster, especially during the melt season".
- Line 20: Emphasis should be made here on the fact that the interactions between subglacial hydrology and ice dynamics are complex; there are a few studies that could be referenced here from observations or models, e.g. (de Fleurian 2022, Ing 2024).
- Paragraph from Line 22 to 28: Here and later in the manuscript I do not understand the special focus on the SHAKTI model; looking through the ISSM code I can count 7 implemented subglacial hydrology models of different complexity. I might have a bias in this matter as I am the author of one of those models, but I think that this paragraph should be reworked. Also, in this paragraph GlaDS is missing a reference.
- Line 29 : missing a reference to Fischler.
- Line 34: I suggest rephrasing with: "multiple properties, such as the time point for example...."
- Line 41 : "next" should be removed
Software :
- Figure 1: On the figure it is not completely clear which fields are actually used by the given models. For example, I expect that the grounded ice melting rate is used as input for CUAS-MPI. That might be something that has been forgotten, or just that I missed something in the design of the diagram.
- Section 2.1.3: It is stated here that the coupler has a fixed time window. Is that something that could evolve? I expect that for an efficient coupling, in the case of subglacial hydrology, you would like to adapt the time window's length depending on the season to catch the different variability of the subglacial drainage system.
- Line 105: The simulation codes (G4000, G250) are used here without references, that should be fixed.
- Line 110: The ordering of the figures is not respected, with Figure 3 appearing before Figure 2.
- Line 120: I think that "set" here is a better term than "aligned"
- Line 129: I am not sure that SHAKTI actually subdivides time steps (and I cannot find where it would do so in the code); the Double Continuum approach on the other hand does it (in the src/c/core/hydrology_core.cpp file).
- Section 2.3: While I commend the authors for stressing the limitations of the coupler, I feel that the presentation they chose disrupts the flow of the paper and sometimes makes for quite awkward sections (2.3.6 for example), and that including the limitations as standard text would be easier to read.
- Line 180: Regarding the partial interfaces, I expect that this is also true for CUAS-MPI and that the hydrology model is also running on ocean and ice free nodes?
- Line 185: I am missing the point of this remark on 3D variables.
- Line 202: I might be missing the point here but I don't see how setting the initial condition is really a limitation. To me the initialisation is part of the modelling workflow and the user is expected to set those values.
- Section 2.3.6: As stated, ISSM performs multiple iterations in a single coupling window. However, I wonder what strategy is used to pass the data to the coupler and hence the coupled model. Is the data passed only a single value at the end of the coupling window, a mean over the coupling window, or the full history? For ice dynamics with a quite slow response time that should not be a big issue, but it can be problematic when dealing with the faster evolution of the subglacial drainage system.
- Line 220: Perhaps the advance function behaviour should be described more in depth here.
- Line 230: A reference to the original CUAS model is missing here.
- Line 263: Is there a reason to use the ice thickness rather than the level sets provided by ISSM to define the bndmask?
- Line 274 : "provide" in place of "provided"
- Paragraph line 276 to 279: You mean here that the forcing is split into surface input, friction-generated water, GHF production of water, and potentially other sources, and that the coupling only provides friction? This sentence is not very clear and should be rephrased.
Experiments
- Line 281 : "a model of subglacial hydrology" should be replaced by "CUAS"
- Line 311: It should be added here that the mapping scheme is used to convert between grids.
Results:
- I think that the choice of results here is not the most informative. The difference maps do not really give a good idea of what the fields really look like, and that is an issue for the comprehension of the impact of the coupled model. I get that the scope is more to assess whether the coupler is actually behaving properly, but we would need to have a better understanding of the results for that. I would like to see a steady-state field for the different variables, at the end of the spin-up or at the end of the coupled simulation, to confirm that the coupled model is actually behaving in a proper way. It would also be interesting to see a time evolution of bulk variables to see if there are any transient changes. Connected to that last remark, I wonder if the dynamics of the coupled system would be the same if the coupling window is changed?
- Line 322. Effective pressure is on panel 7b not 7a
- Line 325. I do not agree with the analysis of the results in the cold-based region. To my eyes, the interior part of the cold-based region shows a decrease in effective pressure while the outside parts show an increase. Without the original field it is hard to figure out what the effective pressure field of the coupled simulation looks like and whether that result is reasonable. I also wonder why there is a quite large decrease in effective pressure and ice thickness at the grounding line of the domain. Is that due to a retreat of the grounding line or an error associated with the mapping of the grids?
- Line 328: I don't see a large interest in the anom. simulation. In my opinion it would be more interesting to give more details on the reference simulation and its comparison with the uncoupled run. If the authors decide to keep this simulation in the end, I think that presenting it without a plot of the transient evolution is not very useful.
- Line 330: The Budd friction law given here is the correct one; the one on line 291 should be changed.
- 3.2 Performance: I expect that the new set-up is needed because the synthetic one runs too fast to be relevant for spotting performance issues? If that is the case, it should probably be stated here.
- Line 346: I am not sure that not restricting CUAS to warm-based ice is the good way to go. If I am not mistaken, the effective pressure can be computed everywhere even if there is no water, and then it just becomes the ice overburden pressure. In this case, wouldn't it be better to run CUAS only on warm nodes and then update the effective pressure to be equal to the ice overburden pressure on the frozen nodes? My main concern with that is that frozen regions could actually have an impact on the simulated subglacial drainage system as they act as dams preventing the flow of water, which could change the water pressure distribution at the base of the ice.
- Paragraph Line 361 to 363: This paragraph is not very clear and should be rephrased.
- Line 368. I don't see why serial and parallel coupling would reasonably have different results if everything else is equal. The only difference I can see is that the coupling scheme has some idling time for one of the solvers, but don't they get the same result from the other participant anyway? I feel that I am missing something here that should probably be explained better.
- Figure 9-10: It would be nice to have the simulation with 384 or 792 CPUs presented in both of those figures to have a better idea of the difference between the two coupling schemes.
Discussion and Conclusion.
I think that a better point could be made to stress the pros and cons of this coupler against Earth-system-specific couplers. The conclusion should also do a better job at synthesising the paper's results rather than presenting future work. I also think that the reference to Keyes should probably appear in the introduction of the paper rather than here.
- Line 415: I am probably biased here, but I think that citing DOCO (deFleurian 2022) here would be more relevant as this model has actually been coupled to an ice dynamics model, whereas I am not aware of an application where SHAKTI was coupled.
Citation: https://doi.org/10.5194/egusphere-2025-3345-RC2
AC2: 'Reply to review by B. de Fleurian (RC2)', Daniel Abele, 14 Nov 2025
We want to thank Basile de Fleurian for their detailed review and suggestions. We first summarize the planned revisions to the paper. This summary is identical for both reviewers since we feel that they agree on the main issues.
We will
- extend the discussion of different (types of) coupling libraries, in particular the pros and cons of using a generic library like preCICE over earth system specific libraries as suggested by the reviewers. This allows readers to better judge the advantages and disadvantages of our solution.
- change the experiment of synthetic setup in Sect. 3.1. to better show the effect of coupling. We will compare coupled and uncoupled runs with focus on the transient behavior to see that feedback between the models is correctly propagated.
- add more detailed investigation of the initialization phase to the performance analysis in Sect. 3.2. We will also describe our process and choices in these experiments more clearly.
- improve the focus of the manuscript on the technical implementation by more clearly stating the goals and results in the introduction, abstract, title and conclusion.
In the following, we answer the specific comments that have been made. In bold font, we repeat verbatim the comment from the reviewer. Continuing in normal font, we give our reply and planned revisions. In some cases, the order of reviewer comments has been changed to reply to multiple related comments at the same time.
Specific comments:
Title:
Depending on the focus of the final paper I think that the title should be modified to be a better reflection of the content of the study. In the current version I would expect a study about the results of the coupling itself rather than a presentation of the coupler. I would also exclude version numbers from the title and have them later in the text to avoid cluttering the title.

We will change the title to better reflect the technical nature of the paper (along the lines of "Development and Performance Analysis of coupling adapters for ...", but not final). Version numbers of models are generally required by the journal, but we will clarify with the editors what is appropriate in this case.
Introduction:
- Line 19: The sentence starting "While the hydrological..." is not extremely clear to me. I would suggest rephrasing with something like: "While the hydrological system evolves on long time scales in the centre of ice sheets, at the margins and particularly in Greenland its evolution is faster, especially during the melt season".
We will rephrase this.
- Line 20: Emphasis should be made here on the fact that the interactions between subglacial hydrology and ice dynamics are complex; there are a few studies that could be referenced here from observations or models, e.g. (de Fleurian 2022, Ing 2024).
This is true, and strengthens the need for a robust coupling solution. The references are appreciated.
- Paragraph from Line 22 to 28: Here and later in the manuscript I do not understand the special focus on the SHAKTI model; looking through the ISSM code I can count 7 implemented subglacial hydrology models of different complexity. I might have a bias in this matter as I am the author of one of those models, but I think that this paragraph should be reworked. Also, in this paragraph GlaDS is missing a reference.
SHAKTI is only used as one example of a hydrology model that is integrated in ISSM, but you are right that it does not have a special significance beyond that or over other models integrated into ISSM. We will make sure to clarify this in the revised manuscript and add more diverse examples as well as the missing reference.
- Line 29 : missing a reference to Fischler.
Changed.
- Line 34: I suggest rephrasing with: "multiple properties, such as the time point for example...."
This sentence was also mentioned by reviewer Moritz Hanke. We will rephrase, taking into account both suggestions.
- Line 41 : "next" should be removed
Changed.
Software :
- Figure 1: On the figure it is not completely clear what fields are actually used by the given models. For example I expect that the grounded ice melting rate is used as input for CUAS-MPI. That might be something that have been forgotten or just that I missed something in the design of the diagram.
All data sent by one model to the other is used as input by the receiving model, either directly or indirectly after some transformation that is described in the text. We will make this clear in the revised manuscript.
- Section 2.1.3: It is stated here that the coupler has a fixed time window. Is that something that could evolve? I expect that for an efficient coupling, in the case of subglacial hydrology, you would like to adapt the time window's length depending on the season to catch the different variability of the subglacial drainage system.
The window was chosen conservatively and will be adapted when necessary for running real simulations. preCICE does not currently support adaptive coupling windows. Performance experiments (also mentioned by reviewer M. Hanke) show that data mapping is not a significant overhead, so we would expect coupling windows can be made significantly shorter in summer without much unnecessary overhead in winter. Also note that the solvers are allowed to do smaller time steps than the coupling window.
- Line 105: The simulation codes (G4000, G250) are used here without references, that should be fixed.
We will rephrase this statement to make clear that the codes refer to setups used in the publication referenced at the end of the statement.
- Line 110: The ordering of the figures is not respected, with Figure 3 appearing before Figure 2.
We don’t see this in our manuscript. We are not sure what is meant here, as none of the mentioned figures are around line 110. However, layout is certain to change during revisions and we will make sure the figures are in the right order.
- Line 120: I think that "set" here is a better term than "aligned"
Changed.
- Line 129: I am not sure that SHAKTI actually subdivides time steps (and I cannot find where it would do so in the code); the Double Continuum approach on the other hand does it (in the src/c/core/hydrology_core.cpp file).
You are correct, this seems to be a mix-up. The point remains that ISSM cores can in principle subdivide time steps. We will correct the statement in the revised manuscript.
- Section 2.3: While I commend the authors for stressing the limitations of the coupler, I feel that the presentation they chose disrupts the flow of the paper and sometimes makes for quite awkward sections (2.3.6 for example), and that including the limitations as standard text would be easier to read.
We see your point, and also that supported features and limitations are not easily described separately in the way that we attempted. The structure will be reconsidered.
- Line 180: Regarding the partial interfaces, I expect that this is also true for CUAS-MPI and that the hydrology model is also running on ocean and ice free nodes?
This is correct. We will add this to the revised manuscript.
- Line 185: I am missing the point of this remark on 3D variables.
These extrusion and depth-averaging features are mentioned here because they are supported by the coupling adapter. We will rephrase this statement to make clear that the features were added to the adapter and the statement does not refer to the existing features in ISSM.
- Line 202: I might be missing the point here but I don't see how setting the initial condition is really a limitation. To me the initialisation is part of the modelling workflow and the user is expected to set those values.
By default, preCICE assumes that all exchanged data is initially zero everywhere. This is rarely a valid assumption for earth system models, and we wanted to highlight this for readers familiar with preCICE but not earth system models. In addition, some ISSM variables are not required to have real initial values since they are normally computed by one ISSM core before they are used by other cores. The adapter requires all coupled variables to be properly initialized because the first set of variables is sent before any computation is performed.
"Limitation" may be the wrong word here. As mentioned above, the structure of these paragraphs will be changed. We will state these points more clearly in the revised manuscript.
- Section 2.3.6: As stated, ISSM performs multiple iterations in a single coupling window. However, I wonder what strategy is used to pass the data to the coupler and hence the coupled model. Is the data passed only a single value at the end of the coupling window, a mean over the coupling window, or the full history? For ice dynamics with a quite slow response time that should not be a big issue, but it can be problematic when dealing with the faster evolution of the subglacial drainage system.
Models normally write and read a single snapshot of the coupled variables at the end of the coupling window. Using subcycling in combination with the time interpolation feature of preCICE, the history of the variables over the coupling window can be used, at least in the serial coupling scheme. We are not currently using this, but you are correct that it may be necessary for real simulations and the adapters can be extended to support this. However, before that the size of coupling windows can also be decreased. We will add these points to the manuscript.
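A sketch of how the time interpolation feature is used (preCICE v3 API; the mesh/data names and vertex list are hypothetical, and the data must be configured with a waveform degree of at least one):

```cpp
#include <precice/precice.hpp>
#include <vector>

// Sample the interpolant of the received data at mid-window instead of only
// at the window end; preCICE evaluates the waveform at the requested time
// relative to the start of the current coupling window.
void readMidWindow(precice::Participant &p,
                   const std::vector<precice::VertexID> &ids,
                   std::vector<double> &melt) {
  double window = p.getMaxTimeStepSize(); // full window size at window start
  p.readData("HydrologyMesh", "BasalMelt", ids, 0.5 * window, melt);
}
```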
- Line 220: Perhaps the advance function behaviour should be described more in depth here.
You are right. The “advance” function includes exchanging and mapping of data and advances the time. In addition, it may block to wait if the other solver has not finished its computation yet. We will describe this in the revised manuscript.
- Line 230: A reference to the original CUAS model is missing here.
Changed.
- Line 263: Is there a reason to use the ice thickness rather than the level sets provided by ISSM to define the bndmask?
We want the grounding line (GL) in CUAS to be consistent with the bed topography. The mesh resolution in ISSM at a given location could be much coarser than the grid resolution in CUAS. Via the preCICE coupler, we only obtain the interpolated level-set field representing the GL location on the coarser ISSM mesh, but this might not correspond to the GL position on the higher-resolution bed topography in CUAS.
We will mention the reasoning in the revised manuscript.
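To illustrate the idea (not the adapter's actual code), a grounded/floating decision per CUAS grid cell from ice thickness and bed elevation via the flotation criterion, with typical density values:

```cpp
#include <algorithm>

constexpr double rhoIce = 910.0;  // ice density [kg m^-3], typical value
constexpr double rhoSea = 1028.0; // sea water density [kg m^-3], typical value

// Grounded where the weight of the ice column exceeds that of the ocean
// water it would displace; for bed elevation b >= 0 (above sea level) any
// positive thickness H is grounded. Evaluated on the fine CUAS grid, this
// keeps the grounding line consistent with the high-resolution bed.
bool grounded(double H, double b) {
  return rhoIce * H >= rhoSea * std::max(0.0, -b);
}
```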
- Line 274 : "provide" in place of "provided"
Changed.
- Paragraph line 276 to 279: You mean here that the forcing is split into surface input, friction-generated water, GHF production of water, and potentially other sources, and that the coupling only provides friction? This sentence is not very clear and should be rephrased.
In the coupling setup, we use the basal melt generated in ISSM due to frictional heat, geothermal heat, and all other water sources that ISSM considers. In addition to basal melt from ISSM, we allow for water from the ice surface (runoff) as a source of water (forcing) for CUAS. ISSM has no mechanism to do so.
We will rephrase this part.
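In symbols (our notation, not the manuscript's), the water input to CUAS is then:

```latex
Q_{\mathrm{CUAS}}
  \;=\; \underbrace{\dot m_{b}}_{\text{basal melt from ISSM (friction, geothermal heat, \dots)}}
  \;+\; \underbrace{R}_{\text{surface runoff, prescribed in CUAS}} .
```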
Experiments:
- Line 281 : "a model of subglacial hydrology" should be replaced by "CUAS"
Changed.
- Line 311: It should be added here that the mapping scheme is used to convert between grids.
Changed.
Results:
- I think that the choice of results here is not the most informative. The difference maps do not really give a good idea of what the fields really look like, and that is an issue for the comprehension of the impact of the coupled model. I get that the scope is more to assess whether the coupler is actually behaving properly, but we would need to have a better understanding of the results for that. I would like to see a steady-state field for the different variables, at the end of the spin-up or at the end of the coupled simulation, to confirm that the coupled model is actually behaving in a proper way. It would also be interesting to see a time evolution of bulk variables to see if there are any transient changes. Connected to that last remark, I wonder if the dynamics of the coupled system would be the same if the coupling window is changed?
The main focus of the manuscript is on the technical details of the implementation and its performance. However, we agree that showing only the differences with respect to a reference field (Fig. 7) is not very convincing. As both reviewers requested more details, we have updated our synthetic setup, mostly regarding the forcing, towards a greater impact on both models.
We will extend the results section to address the reviewers' points. We will add figures and discussion of feedbacks and the transient behaviour of the system, including the effect of the coupling window size. This may only give hints for future use. In-depth analysis will require a full real-world scenario that is out of scope of this work.
- Line 322. Effective pressure is on panel 7b not 7a
Changed.
- Line 325. I do not agree with the analysis of the results in the cold based region. To my eyes, the interior part of the cold based region shows a decrease in effective pressure while the outside parts shows an increase. without the original field it is hard to figure out what the effective pressure field of the coupled simulation looks like and if that result is reasonable. I also wonder why there is a quite large decrease in effective pressure and ice thickness at the grounding line of the domain. Is that due to a retreat of the grounding line or an error associated with the mapping of the grids?
Indeed, our statement was very incomplete. The interior part of the cold-based region shows a decrease in effective pressure, while the outer parts show an increase. In the revised version, we will show differences only on the mask, where both simulations have grounded ice to address the differences in grounding line position in the figure and rephrase the text accordingly. As mentioned above, we will also show the fields and not just the differences.
- Line 328: I don't see a large interest in the anom. simulation. In my opinion it would be more interesting to give more details on the reference simulation and its comparison with the uncoupled run. If the authors decide to keep this simulation in the end, I think that presenting it without a plot of the transient evolution is not very useful.
We agree. As stated above, the revised comparison will focus on a comparison between coupled and uncoupled runs and the transient behaviour. Basically, the setup will be the reference simulation with more interesting forcing from seasonal melt runoff.
- Line 330: The Budd friction law given here is the good one the one on line 291 should be changed.
Changed.
- 3.2 Performance: I expect that the new set-up is needed because the synthetic one runs too fast to be relevant for spotting performance issues? If that is the case, it should probably be stated here.
You are correct, the synthetic setup does not exhibit realistic performance characteristics (e.g., lower iteration counts) due to its low complexity. We will add this to the revised manuscript.
- Line 346: I am not sure that not restricting CUAS to warm-based ice is the good way to go. If I am not mistaken, the effective pressure can be computed everywhere even if there is no water, and then it just becomes the ice overburden pressure. In this case, wouldn't it be better to run CUAS only on warm nodes and then update the effective pressure to be equal to the ice overburden pressure on the frozen nodes? My main concern with that is that frozen regions could actually have an impact on the simulated subglacial drainage system as they act as dams preventing the flow of water, which could change the water pressure distribution at the base of the ice.
This could be a modeller's choice. The current design of the CUAS preCICE adapter would allow for later improvements to deal with frozen areas. For the simulation presented here, we decided not to introduce discontinuities in the effective pressure field at the cold/warm transitions. In the cold ice areas, the basal melt rate is zero, the effective pressure is relatively high, and the sliding velocity is low.
We will mention the possibility of "damming" in the real hydraulic system, the limitation of our setup in this regard, and the reasoning behind our setup.
- Paragraph Line 361 to 363: This paragraph is not very clear and should be rephrased.
Changed.
- Line 368. I don't see why serial and parallel coupling would reasonably have different results if everything else is equal. The only difference I can see is that the coupling scheme has some idling time for one of the solvers, but don't they get the same result from the other participant anyway? I feel that I am missing something here that should probably be explained better.
In serial coupling, the second participant immediately uses the results that were just computed by the first participant. Thus, feedback may be propagated faster. This is probably not an issue if the coupling window is sufficiently short. As explained above, using shorter coupling windows should not be a problem due to low data mapping overhead. We will add this missing detail of serial coupling to the description of preCICE in chapter 2. We will also add this explanation to the discussion.
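For reference, the two explicit schemes differ only in the configuration tag (hypothetical names; a configuration uses one scheme, shown side by side here only for contrast):

```xml
<!-- serial: participants run one after the other; the second immediately
     sees the first's fresh results within the same window -->
<coupling-scheme:serial-explicit>
  <participants first="ISSM" second="CUAS" />
  <!-- time-window-size, exchanges ... -->
</coupling-scheme:serial-explicit>

<!-- parallel: participants run simultaneously; each reads the partner's
     results from the previous window -->
<coupling-scheme:parallel-explicit>
  <participants first="ISSM" second="CUAS" />
  <!-- time-window-size, exchanges ... -->
</coupling-scheme:parallel-explicit>
```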
- Figure 9-10: It would be nice to have the simulation with 384 or 792 CPUs presented in both of those figures to have a better idea of the difference between the two coupling schemes.
You are right that the two figures are not easily comparable. However, because the distribution of CPUs is different between parallel and serial coupling, this is not entirely avoidable. Either the CPU count for one or both of the models or the total count will be different. We have picked experiments where the number of CPUs used by ISSM matches. The focus of these figures is on the general sequence of computation and wait times. Total runtime is compared in Fig. 11 and following.
Note that the number of CPUs in the figures is misleading. In Fig. 9, 10, and 11, the number refers to the combined CPUs used for both participants. In Fig. 12 and 13, the number refers to the CPUs used by each participant individually. We will improve the labels of the figures.
Discussion and Conclusion.
- I think that a better point could be made to stress the pros and cons of this coupler against Earth-system-specific couplers. The conclusion should also do a better job at synthesising the paper's results rather than presenting future work. I also think that the reference to Keyes should probably appear in the introduction of the paper rather than here.
We agree with all of these points. As stated in our opening remarks, we will expand the comparison of different types of couplers. We will rephrase the conclusion to summarize the results of the paper in relation to the goals outlined in the introduction. We will add the reference to Keyes to the introduction.
- Line 415: I am probably biased here, but I think that citing DOCO (de Fleurian, 2022) here would be more relevant, as this model has actually been coupled to an ice dynamics model, but I am not aware of an application where SHAKTI was coupled.
SHAKTI (as well as other hydrology models) is integrated into ISSM, and therefore coupled as well, albeit not using a coupling library. This makes it a relevant comparison. But you are right that it’s not the only relevant comparison, and we will add more references to externally coupled models.
Citation: https://doi.org/10.5194/egusphere-2025-3345-AC2
Data sets
Coupling ISSM and CUAS-MPI: example cases Daniel Abele, Thomas Kleiner, Yannic Fischler, Angelika Humbert https://doi.org/10.5281/zenodo.15849146
Model code and software
ISSM-preCICE adapter Daniel Abele, Angelika Humbert https://doi.org/10.5281/zenodo.15785544
CUAS-MPI with adapter for the preCICE coupling library Yannic Fischler, Thomas Kleiner, Daniel Abele, Angelika Humbert https://doi.org/10.5281/zenodo.15782324
Summary:
This manuscript presents a new coupled setup that implements a two-way data exchange between the Ice-sheet and Sea-level System Model (ISSM) and the subglacial hydrology model CUAS-MPI. The coupling interface between the two models is implemented using the generic coupling library preCICE. The "adapters" implemented as interfaces between the models and preCICE are described in detail. Simulations using this coupled configuration for a synthetic setup were performed to demonstrate the successful interaction between the two models. In addition, performance measurements were carried out to showcase the low impact that preCICE imposes on the overall runtime and scalability of the coupled models.
The focus of the paper is on the technical aspects of the implementation of the preCICE adapters for the two models, the configuration of the coupled setup, and the usage of a generic coupler in Earth system modelling in general. These points are not completely new and are primarily of a technical nature. The manuscript could be improved by enhancing its scientific significance. For example, you could evaluate the results of the synthetic simulations in terms of how the two-way coupling improved the results compared to a stand-alone simulation that uses file-based input data. Additionally, a more detailed comparison of preCICE with specialised coupling libraries/frameworks (e.g. OASIS3-MCT, YAC, or ESMF) in terms of functionality and potential benefits could be included. Also, the pros and cons of using a generic coupler vs. a specialised one and their respective use cases could be discussed.
I recognise the potential interest of the Earth system modelling community in preCICE being used in this context. Therefore, I would consider this manuscript for publication in GMD, after a major revision.
General comments:
Abstract:
Please formulate the goals and achievements of the work presented in this paper more clearly.
Specific comments:
Line 1: "Accurate earth system models"
Is there such a thing as an accurate Earth system model?
Line 30-31: "on different spatial and temporal discretizations, which complicates the coupling"
Isn't handling these issues one of the main functions of a coupler?
Line 34: "time points where"
Maybe: "time intervals at which"
Line 35: Repeated start of a sentence
Line 35-36: "provides sophisticated numerical methods"
Which "numerical methods" do you mean (e.g. implicit vs explicit time stepping, remapping methods, ...)? Please be more specific.
Line 37-38: "easy to either add component like an existing ocean circulation model"
I do not think that this is the case. Reasons will be discussed further below.
Line 41 and 42: "Sect. 3"/"Sect. 4"
No abbreviation for "Section 2" was used.
Line 46: "overview of the coupling"
Maybe: "overview of the coupling setup"
Line 46: "existing codes"
Maybe: "existing three codes"
Line 50: Add long form of preCICE abbreviation
Line 51-52: "handles communication, data mapping, and coordination of the solvers"
Maybe: "handles communication, data mapping, and coordination between the solvers" to explicitly exclude communication like halo/ghost cell exchanges within a solver.
Line 53: "calls the preCICE library"
Maybe something like: "call initialisation routine of ..." would be more specific.
Line 53: "all options"
Maybe: "all preCICE configuration options"
Line 53-54: "This approach requires..."
In my opinion this is something for the conclusion.
Line 54-55: "the respective algorithms are selected"
Be more specific about the algorithms. Which tasks will they perform?
Paragraph 2.1:
Could you add more information about the adapters? Are they part of the solver, part of preCICE, or independent codes? Who usually implements them (the preCICE developers or the model developers)?
Line 62-63: Sentence "To establish the communication ..."
To me these are implementation details not relevant for this paper. How about:
"Communication channels between the processes of both solvers are established using a highly scalable algorithm (Totounferoush et al., 2021)."
Line 70-71: Sentence: "However, if we wanted..."
This might be more suited for the discussion.
Paragraph 2.1.4
Could be part of the introduction as part of "state-of-the-art".
Line 89-90: Sentence "One of the goals..."
Should be mentioned in the abstract and in my opinion would fit better in the introduction section.
Line 133-134: Sentence "This could be resolved..."
Is this relevant for the paper?
L149-152: Sentence "There are different...."
This general description of adapters could be included in section 2.1
L164: "linear cell interpolation"
Add reference to clarify its meaning.
L179: "Partial interfaces"
In Earth system modelling, I would consider the term "masking" more appropriate.
L284: Section 3.1
Add minimal text to introduce this section.
Figure 6:
What additional information is provided by this figure when compared to Figure 1?
Section 3.1.2 Results:
If possible, this section could be improved by evaluating the results in terms of quality of simulation results: Does the coupled setup produce more accurate results than the uncoupled ones?
Line 382: Sentence "This also includes..."
The imbalance between solver initialisation times could be explicitly measured by introducing an additional synchronisation point.
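Roughly what I have in mind, as a sketch: preCICE's initialize() blocks until the partner participant is ready and can therefore serve as the synchronisation point; `setupSolver` is a placeholder for the model's own initialisation, and the second interval is mostly wait time plus preCICE's own setup cost.

```cpp
// Sketch: separate a participant's own initialisation time from the time
// spent waiting for the partner. initialize() blocks until both participants
// are ready, so the second interval is mostly wait time (plus preCICE's own
// setup cost).
#include <precice/precice.hpp>
#include <mpi.h>
#include <cstdio>
#include <functional>

void timedInitialize(precice::Participant &participant,
                     const std::function<void()> &setupSolver) {
  double t0 = MPI_Wtime();
  setupSolver();                 // model-specific initialisation (placeholder)
  double t1 = MPI_Wtime();
  participant.initialize();      // blocks until the partner is ready
  double t2 = MPI_Wtime();
  std::printf("own init: %.3f s, wait + preCICE init: %.3f s\n",
              t1 - t0, t2 - t1);
}
```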
Line 383-386: whole paragraph
Maybe simplify by: "3. Running the solver for one coupling window."
Line 390: "i.e., everything that happens before the participants or the first participant in a serial coupling"
This part is unclear to me and looks incomplete. "everything that happens before the participants [..] in a serial coupling" do what?
Line 389-393: whole paragraph
Initially it was not clear to me that in both cases (serial and parallel coupling) both models have their own processes, but that in the serial case the two models are allocated to the same resources, while in the parallel case each model has its own dedicated computing resources. This could potentially be phrased more clearly.
In MPI, receive operations are often implemented as busy loops that wait until the requested data is available. If the advance step contains an MPI receive operation, CUAS might generate a significant load on the shared resources while waiting for the result from ISSM in the serial case. Therefore, a potentially interesting experiment could be to compare serial-coupling setups using shared versus dedicated resources.
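To illustrate, here is a rough sketch (hypothetical, not CUAS code) of how a receive could back off instead of busy-polling, which would keep shared cores mostly free for the partner solver:

```cpp
// Sketch: wait for coupling data without saturating a shared core. A plain
// MPI_Recv may busy-poll; testing a non-blocking request and sleeping in
// between reduces the load on resources shared with the partner solver.
#include <mpi.h>
#include <chrono>
#include <thread>
#include <vector>

void relaxedRecv(std::vector<double> &buf, int source, int tag, MPI_Comm comm) {
  MPI_Request req;
  MPI_Irecv(buf.data(), static_cast<int>(buf.size()), MPI_DOUBLE,
            source, tag, comm, &req);
  int done = 0;
  while (!done) {
    MPI_Test(&req, &done, MPI_STATUS_IGNORE);                     // poll once
    if (!done)
      std::this_thread::sleep_for(std::chrono::milliseconds(1));  // back off
  }
}
```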
Line 394-397: whole paragraph
Does this represent a process which in climate science is often referred to as "spin-up"? Could this be avoided by initialising the models using restart files from a previous run?
Figure 9: Sentence "With few processes, ..."
I would assume that the idle time for the 768-process case is higher due to CUAS having better scaling properties than ISSM. With 192 processes you seem to have hit a sweet spot, where both models roughly require the same time for the simulation of one coupling window.
Line 401-403: whole paragraph
Do solver iteration counts differ between serial and parallel coupling? Could you quantify the differences between the execution time of a coupling window of both cases? Can these differences exclusively be explained by the imbalance between the two solvers, which can be measured by the "advance" timer?
Figure 12:
For 192 processes, ISSM has an execution time for a single coupling window of around 450 s and CUAS around 200 s. However, in Figure 9, both models seem to each have an execution time of around 650 s for the same number of processes. Did I misinterpret the figures, or how do you explain the inconsistency between the two figures?
Based on the measurements, I would assume that a parallel coupling setup with 512 processes for ISSM and 192 processes for CUAS should perform quite well. Do you agree? Why did you not use similar setups for the measurements of Figure 9?
I personally would start the performance analysis with this data and base the other measurements on its results.
Figure 13:
Does this again contradict the measurements from Figure 9?
Line 407-412: whole paragraph
Instead of defining a process count ratio, wouldn't it be easier to determine the number of processes for each component for a fixed coupling window execution time based on the measurements from Figure 12?
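As a rough sketch of what I mean, with made-up timings rather than your measurements: pick, per model, the smallest process count whose measured coupling-window time is at or below a common target.

```cpp
// Sketch: choose process counts from measured window times (hypothetical data,
// not the measurements from Figure 12).
#include <cstdio>
#include <map>
#include <optional>

// Smallest process count whose measured window time meets the target
// (assuming the time decreases with the process count).
std::optional<int> processesForTarget(const std::map<int, double> &timings,
                                      double targetSeconds) {
  for (const auto &[procs, seconds] : timings) // map iterates in ascending procs
    if (seconds <= targetSeconds)
      return procs;
  return std::nullopt; // target not reachable with the measured counts
}

int main() {
  // Placeholder numbers only.
  std::map<int, double> issm = {{128, 700.0}, {256, 420.0}, {512, 260.0}};
  std::map<int, double> cuas = {{64, 500.0}, {128, 280.0}, {256, 170.0}};
  const double target = 300.0; // desired seconds per coupling window
  auto pIssm = processesForTarget(issm, target);
  auto pCuas = processesForTarget(cuas, target);
  if (pIssm && pCuas)
    std::printf("ISSM: %d processes, CUAS: %d processes\n", *pIssm, *pCuas);
  return 0;
}
```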
Section 3.2.2 Results
The remapping and communication time between two coupling windows is usually negligible in climate setups. However, initialisation time can have a significant impact on the overall runtime, and this is a factor that is often of interest to users. Therefore, this should be analysed in more detail.
(See [1])
Line 416-419: "CUAS and ISSM use [...] are easy to adapt."
These are properties which, in my opinion, are not unique to preCICE but are shared with other coupling solutions. This could be clarified in the discussion.
Line 420-425: whole paragraph
Please include an analysis of preCICE's impact on initialisation time.
Line 425: Sentence "This imbalance..."
Can you explain why it is not possible in this case?
Line 440-441: Sentence "For example, the ice sheet code..."
I was told that this example does not make much sense, since hydrology is already included in Elmer/Ice [3].
Section Discussion:
If I am not mistaken, your adapter design should allow for a single-executable setup using serial coupling, in which ISSM is called by the CUAS-preCICE adapter within the CUASSolver. This could significantly improve the performance of the serial setup. Did you consider this option? Maybe you can add a discussion about this.
I think that, in order to improve the scientific value of the paper, you should extend the discussion on the pros and cons of using a generic coupling library compared to one specialised for Earth system modelling. In case you are interested, here are a few points:
* Usually, specialised couplers like OASIS3-MCT or YAC basically provide the coupling of 2D fields on a spherical surface. For global circulation models that simulate, for example, an atmosphere or an ocean, this constraint is adequate. For the coupling of these models, specialised couplers are probably the best choice, since they also provide user interfaces and remapping options that are optimised for these cases. In addition, to my knowledge these models do not have the software infrastructure required to efficiently support the implicit coupling provided by preCICE.
* For the remapping of properties like surface freshwater flux, conservative remapping as described in [2] is used. This remapping scheme requires the use of spherical geometry during the interpolation weight computation. This is currently not supported by preCICE, and adding support could be difficult. It could be overcome by supporting a remapping scheme based on user-provided weight files. These weight files could be produced in a preprocessing step by a specialised tool (e.g. CDO, ESMF, or YAC); see the sketch after this list.
* I think the best use case for a general coupling library in Earth system modelling are cases where the constraint to the coupling of 2D spherical fields is insufficient (e.g. the glacier-ground interaction described in this paper). It could even be feasible and interesting to set up a configuration where the coupling between ISSM and CUAS is implemented using a general coupler like preCICE, while, in the same setup, additional coupling with an atmosphere and/or ocean model is implemented with a specialised coupler.
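As a rough sketch of the weight-file approach mentioned above (all names illustrative): the weights are computed offline, and at runtime the coupler only applies a sparse matrix-vector product.

```cpp
// Sketch of weight-file based remapping (SCRIP-style). The interpolation
// weights w_ij are precomputed offline (e.g. by CDO, ESMF, or YAC); at
// runtime the coupler only evaluates dst_i = sum_j w_ij * src_j.
#include <cstddef>
#include <vector>

struct WeightTriplet { int dst, src; double w; }; // one weight-file entry

void applyWeights(const std::vector<WeightTriplet> &weights,
                  const std::vector<double> &src, std::vector<double> &dst) {
  for (double &v : dst) v = 0.0;  // reset the target field
  for (const auto &t : weights)   // sparse matrix-vector product
    dst[t.dst] += t.w * src[t.src];
}
```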
Line 455-456 Sentence "The use of preCICE..."
See comment above.
Line 465: "https://git.rwth-aachen.de/yannic.fischler/cuas-mpi"
The associated repository is not publicly available. The link to the version used for the paper is available, so removing the link to the current version would not be a problem.
[1]: https://raw.githubusercontent.com/IS-ENES3/IS-ENES-Website/main/pdf_documents/IS-ENES2_D10.3_FV.pdf
[2]: https://data-ww3.ifremer.fr/TRAINING/DOC/SCRIPusers.pdf
[3]: https://kannu.csc.fi/s/6CRGEdSZPEajnL6