Leveraging Google’s Tensor Processing Units for tsunami-risk mitigation planning in the Pacific Northwest and beyond
Abstract. Tsunami-risk and flood-risk mitigation planning is particularly important for communities like those of the Pacific Northwest, where coastlines are extremely dynamic and a seismically active subduction zone looms large. The challenge does not stop there for risk managers: mitigation options have multiplied since communities have realized the viability and benefits of nature-based solutions. To identify suitable mitigation options for their community, risk managers need the ability to rapidly evaluate several different options through fast and accessible tsunami models, but they may lack high-performance computing infrastructure. The goal of this work is to leverage Google's Tensor Processing Unit (TPU), high-performance hardware accessible via the Google Cloud framework, to enable the rapid evaluation of the different tsunami-risk mitigation strategies available to all communities. We establish a starting point through a numerical solver of the nonlinear shallow-water equations that uses a fifth-order Weighted Essentially Non-Oscillatory method with Lax-Friedrichs flux splitting and a Total Variation Diminishing third-order Runge-Kutta method for time discretization. We verify the numerical solutions against several analytical solutions and benchmarks, reproduce several findings about one particular tsunami-risk mitigation strategy, and model tsunami runup at Crescent City, California, whose topography comes from a high-resolution Digital Elevation Model. Direct measurements of the simulations' performance, energy usage, and ease of execution show that our code could be a first step towards a community-based, user-friendly virtual laboratory that a minimally trained user can run on the cloud thanks to the ease of use of the Google Cloud Platform.
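For readers unfamiliar with the time integrator named in the abstract, the snippet below is a minimal, generic sketch of the three-stage Total Variation Diminishing (strong-stability-preserving) Runge-Kutta scheme of Shu and Osher, driven here by a toy advection right-hand side. It is not the authors' implementation; the names `tvd_rk3_step` and `rhs` are hypothetical, and the central-difference operator merely stands in for the WENO5 flux divergence used in the paper.

```python
import numpy as np

def tvd_rk3_step(u, dt, rhs):
    """One step of the standard three-stage TVD (SSP) RK3 scheme of Shu and Osher.

    u   : current state array (e.g. stacked h, hu, hv fields)
    dt  : time step
    rhs : callable returning the semi-discrete spatial operator L(u)
    """
    u1 = u + dt * rhs(u)                              # first stage
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))        # second stage
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))  # third stage

# Toy usage: linear advection du/dt = -du/dx on a periodic grid.
nx = 200
dx = 1.0 / nx
dt = 0.5 * dx
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.5) ** 2)
rhs = lambda v: -(np.roll(v, -1) - np.roll(v, 1)) / (2.0 * dx)
for _ in range(100):
    u = tvd_rk3_step(u, dt, rhs)
```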
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-116', Ilhan Özgen-Xian, 27 Feb 2023
Summary
The authors explore the use of Google's TPUs for hydrodynamic simulations with application to tsunami modelling. They present a case study of Crescent City, CA, USA. The performance of the model is convincing. The application on TPUs is novel and interesting. Another perceived novelty for me is the evaluation of ease of execution, which is often left out of the discussion when analysing research code.
I recommend moderate revision (small additional simulations requested). Please see my comments below.
Comment #1: In the equations 1–3, the non-linear advection term from page 4 seems to be contained in the term 0.5 (h^2 - b^2)? It would help the reader to point out this term in these equations.
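For reference only, and purely as a generic illustration rather than a reproduction of the paper's Eqs. 1-3, the conservative one-dimensional shallow-water momentum equation groups the nonlinear advection and hydrostatic pressure contributions into a single flux:

```latex
\frac{\partial (hu)}{\partial t}
  + \frac{\partial}{\partial x}\!\left( h u^{2} + \tfrac{1}{2} g h^{2} \right)
  = -\, g\, h\, \frac{\partial b}{\partial x}.
```

Well-balanced reformulations regroup this flux with the bed-slope source, which may be where the combined 0.5 (h^2 - b^2) term comes from; pointing to it explicitly in Eqs. 1-3 would remove the ambiguity.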
Comment #2: Can the authors comment further on the trade-offs of using a high-order scheme with an arguably large stencil with regard to parallel performance, numerical accuracy, and memory? This could be added to the discussion on page 22.
Comment #3: Can the authors give a bit more detail on the numerical treatment at shocks and at wet/dry fronts?
Comment #4: In terms of validation, it would be nice to have an empirical proof of grid convergence and a test of the convergence rate for the analytical cases (Cases 2.1–2.4). The authors should run simulations with successively refined grids and report L-norms and convergence rates. Tables of L-norms could be provided as an Appendix.
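As a concrete illustration of the requested check, the sketch below computes discrete L2 errors and observed convergence rates from successively refined grids. The error values are illustrative placeholders, not results from the paper, and the function names are hypothetical.

```python
import numpy as np

def l2_error(numerical, exact, dx):
    """Discrete L2 error norm on a uniform grid of spacing dx."""
    return np.sqrt(np.sum((numerical - exact) ** 2) * dx)

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed convergence rate from errors on two grids refined by `refinement`."""
    return np.log(err_coarse / err_fine) / np.log(refinement)

# Errors from three successively refined grids (illustrative values only):
errors = [1.2e-2, 1.6e-3, 2.1e-4]
for e_coarse, e_fine in zip(errors[:-1], errors[1:]):
    print(f"observed order ~ {observed_order(e_coarse, e_fine):.2f}")
```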
Comment #5: I feel that the beginning of Section 3.1 discussing the benefits of TPUs for communities with no access to HPC facilities should be moved to the introduction, because it is a good motivation for the conducted research. In that context, Behrens et al. (2022) also suggested cloud computing as a possible alternative to HPC facilities. Perhaps it is of interest to the authors.
Behrens et al. (2022). doi: 10.3389/feart.2022.762768
Comment #6: Can the authors comment on the process of getting access to Google's TPUs? From the website, the cloud service seems to be a paid service. Is it similar to renting time on an AWS or Microsoft Azure?
Comment #7: In section 3.3, the authors should briefly report the formal accuracy of GeoClaw.
Comment #8: I suggest that some part of the discussion could be separated as conclusions. I think the part starting with "Though just a starting point ..." on about line 359 on page 22 marks the end of discussion of results and starts the conclusions and outlook. But the authors may disagree.
Citation: https://doi.org/10.5194/egusphere-2023-116-RC1
AC1: 'Comment on egusphere-2023-116', Ian Madden, 05 May 2023
We thank the reviewers for their many insightful comments, and the editor for inviting discussion. We have attached a formatted set of response letters to each reviewer, including a manuscript resubmission letter to the topical editor. These response letters include the specific revisions made to the paper in response to the suggestions of the reviewers.
RC2: 'Comment on egusphere-2023-116', Anonymous Referee #2, 28 Mar 2023
The research paper presents a numerical model oriented to tsunami simulations based on the SWE. This is a very important topic in many areas around the world, since the consequences of this phenomenon are devastating; administrations are therefore continuously evaluating different options to mitigate its catastrophic effects. Numerical modeling is an effective tool to evaluate the possible effects in an area and the capacity of different measures to mitigate them. However, the high computational cost may be a barrier for communities that do not have access to expensive HPC facilities and/or lack the knowledge to employ them. This model addresses this issue by taking advantage of Google Cloud TPU hardware to accelerate the simulations, so that users can easily access infrastructure with high computational capabilities. The code of the numerical model is available under a free software license, so it may be a useful platform for many researchers, as it can be easily adapted to their needs.
The article is perfectly suited to the scope of the journal. It is well written in general and the references provided are adequate. The introduction is good, the methodology is well explained and the results are remarkable. However, there are several points that should be addressed.
Major comments:
In my opinion the performance analysis section is the weakest part of the article. I think that it should be restructured and some parts rewritten or even removed. The results obtained are remarkable but they are not clearly analyzed.
In the section "3.1 Number of TPU cores", the scalability of the model is analyzed by running the Crescent City case with a different number of TPU cores (strong scaling, see https://hpc-wiki.info/hpc/Scaling). However this should be further elaborated. For sake of clarity, I suggest adding a row in Table 1 showing the speed up versus using a single TPU and then analyze the results observed. Also, a plot showing how the measured speed-up compares with an ideal scaling could be added.
There was no indication of the specific hardware employed. As far as I know there are several generations of TPU available with different capabilities (see https://ieeexplore.ieee.org/document/9499913). The reader may need this information in order to reproduce the experiments or to know what performance should be expected when using the model for their own application.
It is also unclear how the time was measured: the manuscript states that "we first measure the wall-clock time of a simulation" and the table caption adds "This runtime excludes transfer times between the CPU and TPU". Does this mean that the preprocessing was excluded and only the main loop of the simulation was measured? Please clarify.
In Section 3.2 the convergence and runtime were measured by varying the spatial resolution for the Crescent City and ISEC benchmark cases, but the measurements are not properly analyzed and discussed. It looks like there is no clear convergence: in the ISEC benchmark, for example, the error at 2 m resolution is lower than at 1 m, and it is also lower at 8 m than at 4 m. How do you interpret these observations? It looks as though the errors at these resolutions vary within similar ranges. In my opinion, even coarser resolutions should be tested (e.g. 16 m, 25 m, ...) to check when the error increases significantly. The Crescent City case does not have an analytical solution, but following the observations of the ISEC benchmark I would not say that the results at 2 m resolution are the best; in this case, too, much coarser resolutions should also be tested.
It would also be interesting to analyze how the runtimes scale with the number of elements in the simulation. For example, in the Crescent City case the 2 m grid has 16x as many elements as the 8 m grid, but the runtime only increases by a factor of 11. In the ISEC benchmark, however, from 8 m to 2 m the runtime is only 1.7x higher. How do you interpret this?
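To make the observation quantitative, one could report an effective scaling exponent alpha, assuming runtime grows like (number of elements)^alpha. The sketch below uses only the ratios quoted in this comment, not the measured runtimes from the paper.

```python
import math

element_ratio = 16.0            # the 2 m grid has 16x the elements of the 8 m grid
time_ratios = {"Crescent City": 11.0, "ISEC benchmark": 1.7}

for name, t_ratio in time_ratios.items():
    # runtime ~ elements^alpha  =>  alpha = log(time ratio) / log(element ratio)
    alpha = math.log(t_ratio) / math.log(element_ratio)
    print(f"{name}: effective exponent alpha ~ {alpha:.2f}")
```

An exponent well below one would suggest that the coarser cases do not saturate the TPU cores, which may be part of the explanation the authors could offer.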
In Tables 2 and 3 an "Efficiency" row is shown, but there is no explanation in the text of what this quantity represents or how it was calculated.
In Section 3.3, the proposed model is compared with GeoClaw; however, some key information is missing, such as the resolution used and how many TPU cores were employed in this comparison.
In my opinion, Section 3.4 is the most controversial part of the article. I am really happy to see this analysis, since it is a very important topic that is usually not covered in this kind of article. However, the energy utilization estimates were made in such a naive way that the conclusions may be completely wrong:
At this point we do not have any information about the generation of the TPU employed for the research. This is a key point, since the energy efficiency varies with the chip architecture and integration process. The number of TPU cores used for this estimation was also not specified.
At line 298 there is a vague statement of "2 trillion operations per second (TOPS) per Watt" without a specific reference. I found a reference to this figure at https://cloud.google.com/tpu/docs/tpus, but it refers to the "Edge TPU" series, which, as far as I know, is a different product from the TPUs used in Google Cloud. The TOPS concept itself is vague and could mean 8-bit integer or 16-bit floating-point operations, since these are very useful for the machine learning workloads that are the main purpose of this technology.
The authors state that 11.9 million floating-point operations per time step were used, but it is unclear how they estimated this number. If I am correct, this is nearly 7 operations per element to solve the SWE, which looks quite low. The floating-point precision used by the authors for the numerical computations is not specified; taking a look at the code, it seems to use FP32. I have not found any data on the TPU throughput in FP32. In https://ieeexplore.ieee.org/document/9499913 it is stated that the TPUv3 has a throughput of 123 TFLOPS in BF16 with a TDP of 450 W per chip (and 2 cores per chip), so in FP32 the throughput should be lower (half?). This leads to a power usage one order of magnitude higher than stated in the paper. Even that may be an underestimate, since peak throughput values are rarely reached in real-world applications, so the power usage would be even higher. Another approach would be to consider that the TPUv3 has 2 cores per chip, so the 8 cores used for the 8 m simulation running at the TDP for 338 seconds lead to roughly 165 Wh, several orders of magnitude higher than stated in the article.
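For completeness, the reviewer's alternative estimate follows from simple arithmetic under the stated assumptions (8 cores, i.e. 4 TPUv3 chips, 450 W TDP per chip, 338 s of runtime):

```latex
E \approx 4\ \text{chips} \times 450\ \mathrm{W} \times 338\ \mathrm{s}
  = 608\,400\ \mathrm{J} \approx 169\ \mathrm{Wh},
```

which is in line with the figure of roughly 165 Wh quoted above.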
The CPU power usage analysis has issues as well, but I will not cover it in detail. It assumes that a CPU that supports 8 threads (but has only 4 cores) with a TDP of 15 W will use 15/8 W when using a single thread, which is completely wrong.
Therefore, this section (and the corresponding paragraph in the discussion) should be rewritten or removed prior to publication.
Minor comments:
Figure 1: please add a north arrow.
Line 88: a properly referenced link about the usage of Google Cloud TPU is encouraged. Maybe a whitepaper, an article or some documentation.
Lines 161, 163, 166, 445: The year of publication is missing in the Carson reference (2002 if I am correct).
Eq. 9: Please clarify why these two metrics were chosen and explain how they differ in measuring the relative error.
Line 275: Missing "Fig." in: …depicted graphically in (Fig.) 10 and in table 2.
Figures 10 and 11: The colors mentioned in the figure captions (purple and blue) are not the colors present in the figures (black, turquoise, and lime).
Figure 11: The caption only describes the lower-right panel. Also, the chosen time instants of the simulation are not specified.
Citation: https://doi.org/10.5194/egusphere-2023-116-RC2
AC1: 'Comment on egusphere-2023-116', Ian Madden, 05 May 2023
We thank the reviewers for their many insightful comments, and the editor for inviting discussion. We have attached a formatted set of response letters to each reviewer, including a manuscript resubmission letter to the topical editor. These response letters include the specific revisions made to the paper in response to the suggestions of the reviewers.
RC3: 'Comment on egusphere-2023-116', Anonymous Referee #3, 13 Apr 2023
The paper presents results found when solving shallow water equations in the context of tsunami propagation and inundation. I find the paper to be a relevant contribution to this subject area.
The authors present results of a case study in Crescent City, CA. They should expand their discussion of how the results can be used by mitigation planners. For example, they could show a map indicating the high-water mark reached by the tsunami. But that is clearly directly related to the initial wave height (boundary condition). Since earthquakes can be of any magnitude, the height of the resulting tsunami can vary widely. Given that, it is not clear how such a simulation is useful to planners. It could be that the terrain is steep enough that, as the wave height increases, the corresponding high-water marks start to converge. A plot showing successive high-water marks would be a very informative and interesting addition. Furthermore, if the relationship between earthquake magnitude and wave height is well known given the tectonics offshore of Crescent City, the high-water marks could be labeled with the magnitude of the earthquake.
Why does the error go down when the grid size is increased? I suggest the authors either explain this phenomenon or further increase the grid size until the error is clearly increasing.
Scaling data should include relative numbers, e.g., the ratio of the 2-core result to the 1-core result.
Minor points follow.
"leverage the newly developed Google’s Tensor Processing Unit (TPU)" should be "leverage Google’s newly developed Tensor Processing Unit (TPU)", although "newly" may be inapt, given the first version was developed 8 years ago.
Please check references, for example, line 38 should have (Gordon, 2012).
Earthquake should not be hyphenated (line 95).
Typo: line 228 has a typo "t =, "
Typo: line 242 "that that"
The authors should explain what ∆Ω is in Eq. 9.
The number of simulated seconds should be mentioned re: Table 1.
Check table and figure references, e.g. line 275 should have "Figure 10" not "10"; line 281 "Table 3"
The authors' tables have too many digits; in most cases a few significant digits are appropriate.
Fig. 10: line colors referenced are incorrect.
The authors should use scientific notation in some cases and not in others: use it when the plain decimal form would carry more than one leading zero, and avoid it for small-magnitude numbers with no leading zeros (see the sketch after these minor points). Examples: 0.000749 → 7.5 × 10^-4; 4.76 × 10^2 → 476.
Project Safe Havens should be Project Safe Haven
In the discussion (more than once) "tsunami simulation" should be “a tsunami simulation” or “tsunami simulations”
In their discussion of efficiency as relates to climate change, the chip efficiency is not the only concern. The authors may want to mention that Google Cloud purchases enough renewable energy to cover their entire operations. https://cloud.google.com/blog/topics/inside-google-cloud/announcing-round-the-clock-clean-energy-for-cloud
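As a minimal sketch of the notation rule proposed in the minor point above (a hypothetical helper, with the threshold chosen only to match the two examples given):

```python
def format_value(x: float) -> str:
    """Use scientific notation only when the plain decimal form would carry
    more than one leading zero after the decimal point (|x| < 0.01);
    otherwise print the number plainly."""
    if x != 0 and abs(x) < 1e-2:
        return f"{x:.2e}"   # e.g. 0.000749 -> 7.49e-04
    return f"{x:g}"         # e.g. 476.0    -> 476

print(format_value(0.000749), format_value(4.76e2))
```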
Citation: https://doi.org/10.5194/egusphere-2023-116-RC3
AC1: 'Comment on egusphere-2023-116', Ian Madden, 05 May 2023
We thank the reviewers for their many insightful comments, and the editor for inviting discussion. We have attached a formatted set of response letters to each reviewer, including a manuscript resubmission letter to the topical editor. These response letters include the specific revisions made to the paper in response to the suggestions of the reviewers.
Model code and software
tsunamiTPUlab: Ian Madden, Simone Marras, and Jenny Suckale, https://github.com/smarras79/tsunamiTPUlab/releases/tag/v1.0.0