the Creative Commons Attribution 4.0 License.
OceanVar2.0: an open-source variational ocean data assimilation scheme. Sensitivity to altimetry sea level anomaly assimilation
Abstract. This study presents recent developments of the OceanVar oceanographic three-dimensional variational data assimilation scheme that lead to OceanVar2.0. The code has been extensively revised to integrate past developments into a single, consistent, fully parallelized framework. In OceanVar, the background error covariance matrix is decomposed into a sequence of physically based linear operators, allowing specific components of the error matrix to be analysed individually. We focus on the sea level operator, which provides the correlations among Sea Level Anomaly, temperature, and salinity increments. OceanVar2.0 offers the flexibility to use either a dynamic height operator or, for closed domains, a barotropic model as the sea level operator. A diffusive operator to model the horizontal error correlations, replacing the previously used recursive filter, has also been implemented. The new code was tested in the Mediterranean Sea, and the quality of the analyses was assessed by comparing background estimates with observations for the period January–December 2021. The results highlight the better skill of the barotropic model operator with respect to the dynamic height operator, owing to the assumptions required for the level of no motion. Furthermore, we present a method to assimilate along-track satellite altimetry with a forecasting model that includes tides.
Status: open (until 13 Sep 2025)
EC1: 'Comment on egusphere-2025-1553', Julia Hargreaves, 13 Jul 2025
Please respond to this comment from a potential reviewer who has declined to review the manuscript.
"I made a quick check of the manuscript. Unfortunately, the analysis RMSEs are higher than the prescribed observation errors, and this is inconsistent with the basic data assimilation theory. Consequently, I conclude that the system does not function correctly, and therefore, I suggest rejecting the papers"
Citation: https://doi.org/10.5194/egusphere-2025-1553-EC1
AC1: 'Reply on EC1', Paolo Oddo, 16 Jul 2025
Dear Editor,

We agree that a fundamental goal of data assimilation is to reduce uncertainty, and thus the analysis error variance is indeed expected to be smaller than the observation error variance. However, we believe there might be a misunderstanding regarding the metrics presented and their direct comparison in our study. In the context of realistic, complex geophysical modelling, the true state remains inherently unknown. Consequently, directly computing the analysis RMSE against the true state is not feasible. Such a calculation would only be possible in a perfectly controlled Observing System Simulation Experiment (OSSE), an approach not within the scope of this article.

In our manuscript, we do not explicitly compute or present analysis error diagnostics. In a 3D-Var system, computing these can be computationally intensive (e.g., via the inverse of the Hessian). Instead, our manuscript consistently focuses on background-minus-observation misfits, comparing statistics between the pure simulation and the assimilative runs. Our system's proper functioning is demonstrated by the improved misfit statistics of the assimilative runs compared to the corresponding pure-simulation statistics. Specifically, we present the uRMSE of the forecast-minus-observation misfits, together with the Correlation Coefficient, Standard Deviation Error, and Mean Bias. All assimilative runs show improved statistics compared to the pure simulations, indicating the correct functioning of our system. This diagnostic approach, while seemingly empirical, is widely used in ocean data assimilation.

For satellite data, our misfit uRMSE is even lower than the 3 cm observational error. This 3 cm value is our comprehensive estimate of the Sea Level Anomaly (SLA) observational error, designed to incorporate both instrumental and representativeness errors. The reduction in uRMSE between the simulation and the assimilative runs demonstrates that our data assimilation system functions correctly by significantly reducing the differences between the model background and the observations, thereby improving the overall model state and the subsequent forecast quality. Moreover, the difference between the model background and independent observations (Figure 5, for depths shallower than 150 m) is also improved in the assimilative runs. This improvement of the assimilative runs over the pure simulation holds true for temperature and salinity data as well, despite the scarcity of ARGO observations.

We hope this clarification addresses your concerns and provides a clearer understanding of our results. We can state more clearly how the statistics are computed and their implications for analysis error versus statistics against observations.

Sincerely

Citation: https://doi.org/10.5194/egusphere-2025-1553-AC1
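The misfit diagnostics the reply relies on (mean bias, unbiased RMSE, correlation of background versus observations) can be sketched as follows; the arrays and error magnitudes here are hypothetical illustrations, not the study's data:

```python
import numpy as np

def misfit_scores(model, obs):
    """Background-minus-observation diagnostics: mean bias,
    unbiased RMSE (bias removed), and correlation coefficient."""
    d = model - obs
    bias = d.mean()
    urmse = np.sqrt(((d - bias) ** 2).mean())        # uRMSE: spread of the misfit
    cc = np.corrcoef(model, obs)[0, 1]
    return bias, urmse, cc

# Hypothetical SLA series (metres): against the same observations, an
# assimilative run should show a smaller misfit spread than the free run.
rng = np.random.default_rng(0)
obs = rng.normal(0.0, 0.10, 1000)
free_run = obs + rng.normal(0.02, 0.06, 1000)    # larger misfit spread
assim_run = obs + rng.normal(0.01, 0.02, 1000)   # spread below a 3 cm obs error
for name, run in (("free", free_run), ("assim", assim_run)):
    bias, urmse, cc = misfit_scores(run, obs)
    print(f"{name}: bias={bias:+.3f}  uRMSE={urmse:.3f}  CC={cc:.3f}")
```

Note that this uRMSE measures the model-observation misfit, not the analysis error against an (unknown) true state, which is exactly the distinction the reply draws.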
RC1: 'Comment on egusphere-2025-1553', Anonymous Referee #1, 29 Jul 2025
The study describes the development of OceanVar2.0, an oceanographic data assimilation method provided as open-source software written for highly parallel computing environments. The manuscript further compares the method's performance in the Mediterranean Sea with respect to different options for assimilating Sea Level Anomaly (SLA) observations. The performance of assimilating SLA observations is particularly important in the Mediterranean Sea, many other regional seas, and the global ocean, because these observations often represent the largest observational data set available in real time. I think that the study is well written. It provides important insight into the improvements obtained by the sophisticated application of a variational data assimilation scheme in oceanography that can be used as a reference for future applications of OceanVar2.0 and for the development and testing of other oceanographic data assimilation schemes. In particular, it highlights how applying a dynamical barotropic ocean model to simulate SLA perturbations within the model of the background error covariance matrix may significantly improve the variational data assimilation performance with respect to more commonly used simpler assumptions. I recommend the publication of the manuscript in Geoscientific Model Development after addressing a few minor comments.
Minor comments:
1. Line 140 and several other lines: I guess that DB08 should be DP08.
2. Line 270: Can removing the bias due to tides in observations and model forecasts increase the observational error? The two models have different bathymetries and use different computational methods for simulating the impact of tides. Is the atmospheric pressure forcing removed from SLA observations with the barotropic model?
3. Lines 380-390: Problems with vertical stratification might be also due to the calculation of vertical EOFs with difficulties to provide correct temperature and salinity increments from SLA assimilation during summer. Are there alternatives for using EOFs calculated from long model simulations in shallow areas?
4. Line 500: Does the minimizer converge more slowly because the barotropic model is slightly non-linear, or because it includes more complex dynamical processes requiring a slower convergence?
5. Lines 503-510: Is the land-sea mask used to optimise the domain decomposition? According to Fig. 1, with a higher granularity many subdomains may become completely over land and this can reduce the scalability of the software. Are some subdomains computationally more demanding than the others, for example, due to extra computations near the coast? Eventually, the evaluation of the parallel performance could show the scalability of different parts of the software (e.g. barotropic model, geostrophic adjustment etc.).
Citation: https://doi.org/10.5194/egusphere-2025-1553-RC1
RC2: 'Comment on egusphere-2025-1553', Anonymous Referee #2, 28 Aug 2025
The manuscript describes the application of sea level anomaly assimilation utilizing an incremental 3D-Var assimilation scheme. The manuscript introduces the common incremental 3D-Var scheme and the modeling of the background error covariance matrix, which utilizes a series of covariance operators. A particular aspect is the two variants of the covariance operator for sea level: one uses dynamic height, while the other uses a barotropic model. The observation error covariance matrix is also described. The main focus of the manuscript is the numerical experiments applying the data assimilation scheme to assimilate sea level anomaly data into a model for the Mediterranean Sea which includes the simulation of tides. Here, aspects like correcting assimilation innovations for tides and the sensitivity with regard to the configuration of the covariance operator for sea level anomaly, e.g. regarding the chosen reference or rejection depth levels, are assessed. Finally, the variation of the number of iterations in the optimization with the chosen configuration of the covariance operator, as well as the parallel compute performance, are discussed. The manuscript concludes that only the sea level operator using the barotropic model is able to provide sufficiently good assimilation results in the model domain due to its strongly varying bathymetry, while the dynamic height operator has limitations due to its higher sensitivity to the required specification of the level of no motion. The study also finds that using the barotropic model leads to only a very small increase in run time.
The manuscript is overall well structured and the experiments are carefully done. Unfortunately, my experience with sea level operators for variational data assimilation is too limited to state whether the application of a barotropic model operator is new. Actually, the authors refer to 'DB08' for this operator, but DB08 is undefined. I suppose that they mean the work by Dobricic and Pinardi from the year 2008 (thus rather 'DP08'), which introduces an operator using a barotropic model. DP08 did not include the code, but given that this work was published about 18 years ago by some of the original developers of OceanVar, this at least gives the impression that the methodology is not new.
The major weakness of the manuscript is that it claims to present the open source software OceanVar2.0 including an assessment of the sensitivity of sea level assimilation on the covariance operator for sea level. In fact, the manuscript clearly fails to present the software. The link to the software repository is provided and there are statements that it has a modular structure and, for example, provides procedures for the quality control of observations. However, any details on the structure of OceanVar2 as a software are missing. Further, the variational assimilation algorithm is mathematically described, but without hints on its actual implementation. Only section 6 "Performance and Parallelization" is more computationally oriented, but represents only a small part of the manuscript and includes very little detail. To this end, the actual topic of the manuscript is the effect of the sea level covariance operators, while the software is just a side aspect. Thus, the manuscript is less a 'Development and technical paper' but rather an application paper.

My recommendation on the manuscript is that the authors should decide whether they would like to publish this work, in a slightly revised form, as a study on the effect of the covariance operators (if this is sufficiently novel), or would like to revise it into a manuscript on the actual software. The first variant would involve less work (a minor revision), while the second variant would involve a major revision to include sufficient details on the code structure and functionality in combination with a significant shortening of the experimental part into a demonstration of an application. Both variants should be suitable for GMD and it would be the authors' decision in which direction to go. (In the review system, I will mark this as a 'major revision', but the revision for the first variant would only be minor.)
Major comments (my comments are mainly aimed at the first revision variant - the application of studying the effect of the covariance operators):

Overall, the authors should show more care when preparing the manuscript. This is a manuscript with 13 authors. With so many authors, an error like writing 'DB08' instead of 'DP08' should not happen if only one of the authors had read the manuscript carefully. There are also other places where the manuscript does not leave the impression of careful proofreading. Generally, all co-authors are responsible for the quality of a submission and should take this responsibility seriously.
On OceanVar2.0 as an assimilation 'scheme': The manuscript denotes OceanVar2.0 as a 'variational ocean data assimilation scheme'. I don't think that this is correct. Rather, OceanVar2.0 is the software that implements an incremental 3D-Var scheme. Thus, the actual scheme is incremental 3D-Var. From the description in the text, this incremental 3D-Var also seems to be a standard implementation as, e.g., described in the review by Bannister (2017). Also the use of covariance operators is common practice. To this end, it is the actual implementation which is particular, but the algorithm itself is not new. Please rephrase the manuscript accordingly.
Section 2: The section reads irritatingly because it discusses a common incremental 3D-Var scheme with FGAT, which is widely used in ocean data assimilation, but the text is written as if a new method were introduced. It would be useful to clarify the type of the method at the very beginning of the section. To improve the structure, I recommend moving the explanation of FGAT (mentioned in line 111, with some details in lines 115-118) to the end of the section. Please also improve the explanation and clarify the notation of the equation in line 117. I don't see how this actually explains FGAT (the equation uses innovations in an interval, while FGAT means that each innovation is computed at the time of the observation.)
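The distinction this comment draws can be made concrete with a minimal sketch (all numbers hypothetical): in FGAT, each innovation is computed against the background trajectory evaluated at the observation's own time.

```python
import numpy as np

# Minimal FGAT sketch with made-up numbers: the background trajectory is
# evaluated (here: linearly interpolated) at each observation time, so every
# innovation d_i = y_i - H(x_b(t_i)) is taken at the observation's own time.
t_bkg = np.array([0.0, 6.0, 12.0, 18.0, 24.0])   # trajectory times within the window (h)
x_bkg = np.array([1.0, 1.2, 1.1, 0.9, 1.0])      # background value at one grid point
t_obs = np.array([3.0, 10.0, 20.0])              # observation times
y_obs = np.array([1.15, 1.10, 0.95])             # observed values

hx = np.interp(t_obs, t_bkg, x_bkg)              # interpolation stands in for H(x_b(t_i))
innovations = y_obs - hx
print(innovations)                               # one innovation per observation time
```

A plain 3D-Var would instead evaluate the background at a single analysis time for all three observations.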
Sections 3 and 3.1: As mentioned before, the text refers to DB08, which probably means Dobricic and Pinardi (2008), which is defined as 'DP08' in line 50, for the covariance operators, but then not used. The sections include the sea level operator using a linear barotropic model (line 181). Here it would be important to clarify how far this is new. It appears contradictory that the previous developers of OceanVar published the use of the barotropic model, but apparently did not include it in OceanVar.
Section 5 (Here, I focus on the overall content, while more specific comments are further below):
The discussion of the results is quite detailed, but parts of it are also only short and add little insight. I recommend to overall shorten this part and to focus on the relevant insights, skipping more superficial discussions. In particular:
- For me it is unclear why the authors decided to perform two sets of experiments, one without correcting velocities and one with such correction. Actually, multivariate data assimilation is the usual standard today. Thus, updating the velocity components when assimilating sea level data appears to be common practice. Given that no particular insights are found from the separate discussion of the two sets (the order of the skill is not significantly different), I recommend to focus on one set. This should preferably be the multivariate case updating also velocities.
- The relative improvement score for the correlation coefficient, S_CC, appears to result in unreasonably large values. This is due to the division by the reference correlation, which, in cases where the correlation in the free run is below 0.1, leads to high values. Given that correlations have a well-defined upper limit of 1, the normalization by the reference correlation is not required. It would be adequate to simply assess the difference in the correlations between the free run and each assimilation experiment, and I recommend doing so. The results would be mainly the same, but without weird-looking improvements of up to nearly 10000 % for a bounded quantity like the correlation coefficient. Perhaps, one would also see the differences of the effects in the different assimilation experiments in Figure 7 more clearly.
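The numerical point can be illustrated with hypothetical correlation values: the relative score explodes when the reference correlation is small, while the plain difference stays bounded.

```python
import numpy as np

# Hypothetical free-run and assimilation correlations: dividing by a small
# reference CC inflates the score, while the plain difference stays bounded.
cc_free = np.array([0.05, 0.30, 0.70])
cc_assim = np.array([0.55, 0.60, 0.80])

s_rel = 100.0 * (cc_assim - cc_free) / cc_free   # relative score: ~[1000, 100, 14.3] %
s_diff = cc_assim - cc_free                      # difference score: ~[0.50, 0.30, 0.10]
print(s_rel, s_diff)
```

The first case improves by "1000 %" relatively, yet its absolute gain (0.50) is only five times that of the third case (0.10), which is the comparison the difference score reports directly.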
- Figure 8 shows the scores relative to the baseline assimilation experiment Exp-1. The figure as well as the related discussion is mainly redundant with the information included in Figure 7 and its related discussion. Differences between the different assimilation experiments are already visible in Fig. 7. It is just that Fig. 8 quantifies these differences directly. However, this also seems to amplify apparent differences, which are actually very small in Fig. 7. To this end, I recommend to omit Fig. 8 and the related discussion. If the authors like to point out particular differences that are currently only discussed in relation to Fig. 8, they can include these into the discussion relating to Fig. 7.
- Table 2 gives an overview of scores, but the text only refers to the table without ever discussing details. I don't see that the table contains relevant information beyond what was already discussed in relation to the figures. Hence, the table can be dropped without losing relevant information.
- Figures 9 and 10: As mentioned before, I don't see an added benefit of discussing the two different sets of experiments (with and without updating the velocities). In fact, the discussion of the detailed Figures 9 and 10 is only 12 lines of text. There does not seem to be additional insight except that the "differences between the experiments are less pronounced" (lines 474/475). This insight could also be included into the manuscript as a single sentence without adding figures.
- Section 5 contains particularly many grammatical errors. It would be good if the authors would spend some time to improve the writing quality of this section.

Section 6:
This section discusses the compute performance. It should not be of particular relevance if the focus of the paper is targeted at the application of SLA assimilation. Apart from this, there are some critical points:
- Line 485 states that "Rigorous testing has been conducted to guarantee bit-for-bit (BFB) reproducibility ...", but then in lines 487/488 it is clarified that the global matrix-vector multiplication used in the solver algorithm precludes such reproducibility. To this end, it is unclear for what parts the reproducibility was ensured. For simple distribution of arrays, the reproducibility is always ensured, while global parallel sums (and global matrix-vector multiplications include such) do not ensure reproducibility. Please clarify this aspect.
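The root cause behind this point is easy to demonstrate: floating-point addition is not associative, so a global sum reduced in a different order (as arises from a different domain decomposition) need not be bit-identical.

```python
# Floating-point addition is not associative: two reduction orders of the same
# three numbers, as would arise from different domain decompositions, differ
# in the last bit even though both results are correct to rounding.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c     # e.g. partial sums in rank order 0, 1, 2
right = a + (b + c)    # e.g. ranks 1 and 2 reduced first
print(left == right)   # False: 0.6000000000000001 vs 0.6
print(abs(left - right))
```

This is why simple array distribution is bit-for-bit reproducible while global reductions (including matrix-vector products) generally are not.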
- Figure 11 and related text: Figure 11 shows distributions of the number of required iterations of the solver for the different experiments. This information is far more detailed than the related text. The main insight from the figure is that the number of iterations can vary strongly between about 12 and 45. However, the median number of iterations varies only between 24 and 26 iterations, which is insignificant given the wide spread. To this end, I don't see any point in the discussion in lines 495-502. The insight is instead that the different configurations do not lead to any relevant difference in the required number of iterations.
- Figure 12 and related text: The text claims that "Up to 36 cores, the experiment with the dynamic height operator consistently outperformed the one using the barotropic model" (lines 507/508). However, the figure shows that the average run times are nearly identical, with a difference that seems to be below 1 second in most cases. At the same time, the run times show a large variation of 10 seconds or more. Thus, there does not seem to be any statistically significant difference in the average run time. The authors also discuss that the performance improves up to 36 or 72 cores depending on the experiment, but shows no further improvement for larger numbers of cores. This discussion omits the important fact that there are cases where the run time increases for more than 36 cores. In addition, the figure shows that the speedup is in fact very limited. For the blue line of Exp-4, only a speedup of about 8 is obtained when increasing the number of cores from 1 to 36 (from about ~40 seconds to ~5 seconds). This implies a parallel efficiency of about 22%. Thus, the scalability of the algorithm is very limited despite the large model grid of 1307x380x141 points. I recommend that the authors also discuss the limited scalability and its possible reasons. The variation of the run times would also be more relevant to discuss than the insignificant difference of the average run times.

Section 7:
As for previous recommendations, I recommend that the authors carefully reconsider the scope of the manuscript.
- Please also avoid contradictory statements like that "Computationally the barotropic model is more expensive than the dynamic height operator" (line 533) followed by "large time-step significantly limiting the computational demand." (line 535). This statement says that the barotropic model is more costly, but the higher cost is very small. One could write directly in a concise form that the barotropic model does only lead to a very small increase in run time.
- In line 539 it is stated "The OceanVar2 code is stable, robust, its previous versions have been largely documented in several scientific papers...". If the manuscript is about OceanVar2, which is a revised code, it should be irrelevant that previous publications already described previous versions of OceanVar. This holds even more as no previous papers are referred to (this keeps me from checking whether it is true that previous versions of OceanVar are actually documented and accessible). Please ensure that the statement is consistent and focused.
- The last sentence of the section shortly points to future developments. This sentence is speculative 'Future developments could explore', but also superficial. I recommend to drop it or to replace it with aspects that are more concrete.
Further comments:

lines 32/33: The authors cite De Mey and Robinson (1987) on the requirement of 'advanced extrapolation algorithms' for the 'effective integration' of satellite altimetry and sea surface temperature data. I have the impression that this statement and citation are outdated. Data assimilation has advanced strongly since the year 1987. Nowadays, the assimilation of altimetry and surface temperature is common practice. I don't see any point in stating that nearly 40 years ago, this was considered a challenge.
lines 44/45: the text states "...in developing shared software tools like PDAF (Nerger et al., 2005), ROMS-4DVAR (Moore et al., 2011), DART ..." Here, ROMS-4DVAR does not fit into the list. While PDAF and DART are general DA frameworks, ROMS-4DVAR is a particular implementation for the model ROMS and not otherwise usable. Please revise this list. Further, the reference for PDAF is incomplete and one cannot find this proceedings paper based on this incomplete reference. It can also be useful to cite Nerger and Hiller (2013), which is a peer reviewed publication.
lines 46/47: Following the mention of some DA software tools mentioned above, the text states "However, the increasing complexity of data assimilation problems, particularly with the adoption of ML, demands continued research and development of new tools." In this sentence the authors seem to claim that the systems mentioned before do not include 'continued research' and that they cannot adopt ML. This is clearly incorrect for PDAF and DART, which are regularly updated with new releases. Please rephrase this to avoid such claims.
line 54: The reference Coppini et al. (2023) is not a proper citation. Is this a preprint; is there a published version?
line 56: The statement on OceanVar "has been used to test and develop numerous new schemes and features" is not useful in this form and seems to be mainly aimed at adding references in which the authors are involved. I recommend to either include more information on the 'new schemes and features' or rather to remove this statement.
lines 63/64: The authors describe in line 60 that OceanVar2 "involved comprehensive testing and debugging" and here state "This rigorous development positions OceanVar2.0 as what we consider a leading advancement in ocean data assimilation". Apparently, the authors consider testing and debugging to be particular. However, this is just common practice in software development. I get the impression that the statement might imply that OceanVar was previously not carefully tested and debugged, and thus the development did not follow common development practices. This potential weakness, however, does not give a basis to claim that the testing and debugging is a "leading advancement in ocean data assimilation". Thus, I recommend to remove this statement.
line 75: 'revisited' should likely be 'revised'
lines 77/78: It is stated that "OceanVar2 is applied to a newly developed Mediterranean Sea circulation forecast model". This statement looks misleading. The model does not seem to be a 'new' model, but an advancement of the previous model system resulting from adding tidal forcing.
line 82: 'Section 1' should be 'Section 2'
line 90: It's stated: "x is the true state". I don't think that this is correct. 'x' is the state what is to be estimated by the DA process. The true state is unknown and one would not be able to change it.
line 93: It's odd to see the definition of particular state vector variables in a general introduction to the 3DVar method. I recommend moving this to an appropriate location describing the experimental setup.
line 112: "The minimum of J(\delta x) in (3) is obtained for x_a = x." Please take into account that there is no 'x' in Eq. (3).
line 121: 'defined' should likely be 'definite'
line 191: The sentence in this line is an outlook, which doesn't fit into the scope of this section. Perhaps, it can be mentioned in the conclusions.
line 222: Here, Brankart et al. (2009) is cited with regard to the observation error, which looks misleading. Obviously, this study did not introduce the concept of observation errors, or the distinction of measurement and representation error, which is used for much longer time. Perhaps, one can cite Brankart et al. (2009) with 'see' in the sense of a review. Alternatively, the authors can look for a more original paper.
lines 234-239: Here statements like "Several options are implemented in the OceanVar2 to shape the observation error and the interested reader is asked to consult with the code manual, available with the code." or the description of possible quality control options are very superficial. If the authors decide to focus the manuscript on the application, then these descriptions should be revised to describe particular methods that are used in the study. If the authors decide to revise to present the software OceanVar2, more concrete details would be needed.
line 256: The sentence in this line looks redundant to that in line 250. I recommend to remove it.
Figure 1/line 259: The figure shows the satellite track for SLA data over 21 days. Showing this long period seems to contradict the fact that daily assimilation is done. It would be more meaningful to show the satellite tracks for one day. This would give an indication of the data coverage.
Lines 267/268: The manuscript states "This paper offers a solution to the assimilation of satellite altimetry in a model with tides, showing that a filtering procedure can be accurate enough..." I'm wondering why it should be useful to filter out tides using some tidal components? Since tides are both in the observations and the model wouldn't it be better to actually assimilate the full signal, hence correcting also the tides? Please comment on why the filtering is your preferred approach (I also recommend to write 'This work' rather than using the colloquial 'This paper')
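For context, the filtering this comment questions can be sketched as a least-squares harmonic fit that removes a few tidal constituents from a sea level series; the series, constituent set, and amplitudes below are entirely synthetic illustrations, not the study's procedure.

```python
import numpy as np

# Synthetic sea level series: slow "dynamic" signal + two tidal lines + noise.
hours = np.arange(0.0, 30 * 24.0, 1.0)                   # 30 days, hourly
omega = 2 * np.pi / np.array([12.4206, 12.0, 23.9345])   # M2, S2, K1 periods (h)

rng = np.random.default_rng(1)
signal = 0.05 * np.sin(hours / 100.0)                    # slow SLA signal (m)
tide = 0.20 * np.cos(omega[0] * hours) + 0.07 * np.sin(omega[1] * hours)
sla = signal + tide + rng.normal(0.0, 0.01, hours.size)

# Least-squares fit: one cos/sin pair per constituent plus a mean term.
G = np.column_stack([f(w * hours) for w in omega for f in (np.cos, np.sin)]
                    + [np.ones_like(hours)])
coeffs, *_ = np.linalg.lstsq(G, sla, rcond=None)
detided = sla - G[:, :-1] @ coeffs[:-1]                  # subtract the tidal part only

# The residual against the slow signal shrinks once the tide is removed.
print(np.std(sla - signal), np.std(detided - signal))
```

Whether such filtering is preferable to assimilating the full tidal signal is exactly the question raised above; the sketch only shows what the filtering operation does.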
Line 288 and elsewhere: The authors use 'simulation' to denote an experiment without assimilation. This terminology is irritating given that all experiments are actually simulations. I recommend to use a more common and less irritating terminology like 'free' or 'free run' to denote the experiment without data assimilation.
Line 300: Please be more specific about how the observation errors were tuned. Simply referring to Desroziers et al. (2005) looks insufficient.
Line 303: Observation errors of 0.1 °C and 0.02 psu appear very small given that there should be significant representation errors at the model resolution of about 4 km. Please comment on the reasoning for the chosen values.
Lines 304-322: The structure of the text in which the experiments are introduced from line 304 onwards is irritating. It is mentioned that two 'sets of experiments' are performed, but then six experiments are described. Only later, in line 319, one learns that the six experiments are actually 'one set' and that there is a second set. I recommend to already define the difference of the two sets after the initial sentence in line 304 and to clarify that each 'set' consists of six actual experiments.
Line 330: The first sentence is very difficult to read due to sub-clauses that break the logical flow. Please reformulate it for clarity.
Lines 345-361: I'm wondering why the explanation of the scores is included within the description of the actual results. It should be more meaningful to include it as a subsection in Sec. 4.
Lines 368/369 "A slight but consistent improvement is noted in the CC." What makes the improvement 'slight'? Changing a correlation from 0.3 to 0.7 looks more than 'slight'. Please avoid such subjective classifications and concentrate on the numbers.
Line 370: "The SDE in the simulation is generally negative". This statement contradicts the figure, which shows positive and negative values of the SDE. It just that the amplitude of negative SDE is higher than those of positive SDE. Please correct the statement.
Line 373: "The SLA ... statistics were clustered according to ocean depth". This statement is misleading. Actually, to my understanding from later text, the clustering is not for depth intervals, but for regions in which the ocean floor is at a certain depth (line 472 states 'clustered according to the bathymetry'). Please clarify this here.
Line 375: "more constant" - this is one of the examples showing little care in writing; 'constant' has no comparative form.
Line 386: Here, "In Fig. 6 the..." repeats the beginning of the previous sentence. Further, it is unclear why a new paragraph is started here.
Line 425: "These results highlight the challenges associated with choosing an appropriate level of no motion". I don't see where the "challenges" are. Basically the experiments show which level does perform well. Unfortunately, the experiments do not seem to provide insight into why a certain level performs better. An obvious aspect is that the omission of observations due to the specified level counters possible positive effects of including observations with a possibly sub-optimal level. This aspect finds little attention in the discussion and should be made more explicit.
Line 441: It is stated "... see that the Exp-4 and Exp-5 produce, in general, worse results, and the worsening is amplified as the depth increases and the level of no motion decreases." I have the impression that this is what we simply expect from physics: Assuming an unrealistic level of no motion yields worse results. I'm wondering that the authors just set the level of no motion to some essentially arbitrary value, while it should be known at which approximate depth the actual level of no motion exists. Including more physical insight might help here.
Lines 466/467: "the relative performance of the different experiments seems to be confirmed". Given that the actual values are available there should be no need to write a speculative statement like 'seems to be confirmed'. The authors can clearly assess the performance and provide a concise statement.
Line 481: The text states that "To optimize computational performance ...adopts a domain-decomposition". Actually, domain-decomposition does not "optimize" computation performance, but it can improve it. Please rephrase.
Line 484: Please add a reference for MPI
Line 556: 'OP' should likely be 'PO' for Paolo Oddo.
I recommend to also carefully proofread the manuscript. There are various places where singular or plural forms are incorrectly used, or where the grammar is incorrect. While there are such errors throughout the manuscript, Section 5, including its sub-sections, contains particularly many grammatical errors, such as incorrect word ordering.
References:
- Bannister, R. N.: Q. J. Roy. Meteorol. Soc., 143, 607-633, 2017
- Nerger, L. and Hiller, W.: Computers and Geosciences, 55, 110-118, 2013

Citation: https://doi.org/10.5194/egusphere-2025-1553-RC2