Operational hydrodynamic service as a tool for coastal flood assessment
Abstract. A comprehensive, high-resolution hydrodynamic operational service using the XBeach model is presented and tested for three urban beaches in Barcelona, NW Mediterranean Sea. The operational architecture is based on Python scripts combined with task automation tools, ensuring a user-friendly system implemented on a standard desktop computer. Hydrodynamic validation of the model is carried out using data gathered during a field campaign in 2022, when a high-intensity storm occurred, resulting in a root mean square error of around 0.4 m and a skill score assessment index of 0.82. Flooding predictions were validated using videometry systems, yielding satisfactory Euclidean distances of less than 5 m for storms close to the date of topobathymetry collection. For storms occurring years earlier, the distances ranged between 7 and 15 m, underscoring the need for regular topobathymetry updates to maintain forecasting accuracy. The operational system is designed to provide early warning of coastal flooding with a three-day horizon. The service provides a warning system with a specific categorisation of the event, enabling end-users to prepare for possible flooding and assisting their decision-making during such events. The presented methodology is easily adaptable and replicable to meet user requirements or to be applied in other areas of interest.
Status: open (until 23 Jan 2025)
RC1: 'Comment on egusphere-2024-3373', Anonymous Referee #1, 07 Jan 2025
Operational hydrodynamic service as a tool for coastal flood assessment
Xavier Sánchez-Artús, Vicente Gracia, Manuel Espino, Manel Grifoll, Gonzalo Simarro, Jorge Guillén, Marta González, and Agustín Sanchez-Arcilla
Preprint egusphere-2024-3373
This paper presents the setup, validation, and application of an operational service for forecasting flood impacts on three urban beaches in Barcelona, Spain, using the XBeach model. The authors highlight that the operational tool is designed for standard desktop computers to offer a user-friendly, high-resolution system with a three-day early-warning horizon. Validation of the tool is presented by comparing modelled results to a field campaign in March 2022. Video analysis is utilised to further validate the model's ability to predict flooding. The approach is applicable to other areas where such data is available. The paper is upfront about the drawbacks of the model and approach. Some recommendations for future research are made.
This research is a novel, local-scale application of XBeach to support decision-making and flood preparedness in the coastal zone. The paper is well written and contributes to the field of research on tools to improve coastal resilience. Additional attention is needed to clarify the justification for this tool, to explain how flooding is interpreted and defined, and to further validate the accuracy of the warning system. I list these key points below, in addition to numerous suggestions to improve readability.
Clarify the justification for this tool
The conclusion that bathymetry is important for accurate representation of shallow water wave processes at the coast is not groundbreaking, and would have been known from the outset. So the introduction needs to better set the justification and context for this type of tool, and explain why an operational model (which relies on accurate, up-to-date bathymetry) is applied here when that bathymetric data isn't available or used.
Consider how other technologies could support delivery of more up-to-date bathymetries, or how morphological updates / outputs from XBeach could be used to feed into the next day's simulation.
How flooding is interpreted and defined
It is not clear what metric is being used to define flooding, so it is difficult to grasp what you are actually presenting in respect to flooding results.
- Is it a maximum water level, where water level exceeds a critical threshold? Flooding occurs where maximum water levels surpass the ground surface elevation.
- Is it maximum landward extent of wave runup over the simulation? Flooding is defined when run-up exceeds coastal defences.
- Number of wet pixels averaged over the simulation? XBeach outputs maps of inundation depth at each grid cell. Is flooding defined as areas where water depth is above a specific threshold?
Would your ‘flooding’ results vary if a different approach is taken to define extent?
Further validate the accuracy of the warning system
I don’t think the discussion goes far enough to show how useful the operational tool is. Table 3 shows the warning system that would be assigned to each event that was simulated. It would be nice to see evidence of flooding presented e.g. did Storm Gloria (2020) cause substantial flood impacts (e.g. damage, disruption, need for evacuation)? Would this tool have averted disaster if a red warning was issued to residents 3 days in advance?
Table 3 would also benefit from a column showing maximum significant wave height or maximum storm tide height (tide + surge + max wave height) across the event, for easier comparison. Could you add columns 3 and 5 from Table 1 together?
Can you comment on whether it is a linear relationship between increasing storm tide and the warning level? What are the thresholds in forcing conditions that cause the warning level to change (i.e. become more severe)?
Abstract
L1: Begin the abstract with a sentence to introduce the context and need for this tool e.g. no sufficient tools are currently available, needed to avert storm impacts, improve preparedness of coastal communities.
L2: Explain what the operational tool will do. Move this sentence to earlier in the abstract: ‘The operational system is designed to provide early-warning coastal flooding at three-days horizon.’
L10: Explain if the warning system is tested / validated to demonstrate its usefulness.
Introduction
L22: The tool is useful to more than just stakeholders in the coastal zone. It would be useful for coastal managers, emergency services, landowners, businesses, residents. Surely something broader would be better here e.g. ‘Decision makers must have suitable tools to…’
L29: 'Among others' is too vague. What other processes are important?
L33: 'Various models' is too vague. Give examples, e.g. numerical or machine learning models, and hindcast or forecast models.
L39: ‘Leeway’ – can this be quantified? How much time is needed realistically? What is the current lead time of forecast systems, and why does this need to be improved?
L53: ‘Challenges’ – it’s not entirely clear from the previous paragraph what these challenges actually are. Would be worth summarising here.
L57: The strategy uses hydrodynamic information previously computed by other models. I think you are referring to the CMEMS data, but it's not clear here. A reference is needed to explain what model results you are referring to. If it is not the CMEMS data, then what is it? What if a user doesn't have access to previously completed modelling results? Does it make the whole approach unfeasible? If so, then details of these previously completed model runs must be included in the methodology too.
Methodology
The methodology needs a data framework diagram to show how the boundary conditions link up. The workflow in Fig 2 is great, but I’m still left unsure how all the boundary conditions and validation data link together. Could sit as a subplot in Fig 2.
L93: ‘Triggers are selected’ – what kind of triggers can be used? Water level, beach level?
L101: More detail is needed on the XBeach setup. Appendix A gives details of the parameterisations, but not enough information is provided on the model domain. What is the size, extent, and distance offshore? How many CMEMS data points are used to force the model, and where do the CMEMS point(s) sit with respect to the offshore boundary?
L123: Does the three-day model time include spin-up? How is this accounted for?
L130: How is a flooded cell identified?
Flooding isn’t just based on area, but also the depth of the water. How is this accounted for?
Is there a minimum depth threshold?
A separate section on how flooding is defined, calculated, and identified would be useful.
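To illustrate one possible definition (a minimal sketch only, not the authors' method; the file name, variable names, and minimum-depth threshold are all assumptions):

```python
# Minimal sketch: count "flooded" dry-land cells in an XBeach output file.
# Assumptions (not taken from the paper): the output netCDF is "xboutput.nc",
# bed level is stored in "zb" and water level in "zs" (time, y, x), and a cell
# counts as flooded when it starts above mean sea level but the simulated
# water depth (zs - zb) exceeds a minimum threshold at any output time.
import netCDF4 as nc
import numpy as np

MIN_DEPTH = 0.05  # m, illustrative cut-off to ignore thin films of water

with nc.Dataset("xboutput.nc") as ds:
    zb = np.asarray(ds.variables["zb"][:])  # bed level
    zs = np.asarray(ds.variables["zs"][:])  # water surface level

depth = zs - zb                              # instantaneous water depth
initially_dry = zb[0] > 0.0                  # cells above mean sea level at t = 0
flooded = initially_dry & (depth.max(axis=0) > MIN_DEPTH)

pct = 100.0 * flooded.sum() / initially_dry.sum()
print(f"Flooded fraction of the initially dry beach: {pct:.1f} %")
```

Stating explicitly whether the paper's metric is closer to this (extent above a depth threshold), to maximum run-up, or to a time-averaged wet area would remove the ambiguity.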
L134: 25% seems to be quite a high threshold for the green warning. Is there evidence showing that past events that would have triggered a green warning did not cause any flooding? How do you know this RAG rating is accurate, and that events at these levels cause a hazard?
L144: Can you suggest numerical impacts that could be included here?
L157: How does the storminess seen in the field campaign period compare with the longer-term record, e.g. within the last 30 years? State the return period of the events simulated, or the percentile each event represents when compared to model or observation data. E.g. does storm Celia represent a 90th percentile event, or a 50th percentile event? This would provide more context for the validation.
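For example, the percentile could be read straight off a long-term wave record (a minimal sketch; the file name and peak value are placeholders, not data from the paper):

```python
# Minimal sketch: where does a storm's peak Hs sit within a long-term record?
# "hs_hourly_30yr.txt" and the peak value are placeholders, not data from the paper.
import numpy as np
from scipy import stats

hs_record = np.loadtxt("hs_hourly_30yr.txt")   # long-term hourly Hs record
hs_storm_peak = 4.0                            # illustrative peak Hs in metres

percentile = stats.percentileofscore(hs_record, hs_storm_peak)
print(f"Peak Hs sits at the {percentile:.1f}th percentile of the record")
```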
L164: Similar to above, quantify ‘major storm’
L167: Quantify ‘substantial coastal flooding’ – how much flooding, what were the impacts?
L168: Provide the exact dates
Figure 6: Annotate the storms, including aforementioned return period or percentiles, and exact dates.
L179: Amend this sentence to: ‘Therefore, for calm periods, both approaches are good’ just to clarify what you mean.
L181: Clarify what an ‘Argus-like station’ is.
L183: You selected one out of six available cameras. Would the results have been different if you used a different camera? Have you tested this? Clarify why you picked the one you did.
L186: Define a flooding line and how this is identified.
Results
Validation isn’t really a result. I would move this to its own section.
L226: remind the reader what M1 and M2 are here. Also consider giving them more obvious names; these make me think of tidal constituents, which confused me. Use T1 and T2, standing for tripod, instead.
L229: Could the underestimation be because you are only using one CMEMS point to force the model? Would a space-varying boundary condition be more accurate? Consider giving a reason for this.
L231: 0.49 m is a big RMSE for a wave height range < 4 m. You should also present a mean bias to see if this error is consistent.
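Both statistics are straightforward to report together (a minimal sketch; hs_model and hs_obs are placeholder arrays, not the authors' data):

```python
# Minimal sketch: report mean bias alongside RMSE so any systematic under- or
# over-estimation is visible. "hs_model" and "hs_obs" are placeholder arrays.
import numpy as np

def error_stats(hs_model, hs_obs):
    diff = np.asarray(hs_model) - np.asarray(hs_obs)
    rmse = np.sqrt(np.mean(diff ** 2))  # root mean square error
    bias = np.mean(diff)                # mean bias (positive = overestimation)
    return rmse, bias
```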
Section 3.2: The main result about the importance of up-to-date bathymetry gets lost here. I have read this section several times and still can't quite get my head around what you are trying to show. I think this is where you show that the time elapsed since the last bathymetric survey reduces the accuracy of the model results. Can you plot how the RMSE in model results increases with time since the last bathymetric survey? More detail is needed in L244–254 to explain when the last survey dates were relative to the model results.
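Even a simple plot would make that relationship explicit (a minimal sketch with placeholder values, not results from the paper):

```python
# Minimal sketch: flood-line error versus time elapsed since the last
# topobathymetric survey. All values below are placeholders.
import matplotlib.pyplot as plt

months_since_survey = [1, 6, 24, 48]        # placeholder durations
flood_line_error_m = [3.0, 5.0, 9.0, 14.0]  # placeholder Euclidean distances

plt.plot(months_since_survey, flood_line_error_m, "o-")
plt.xlabel("Months since last topobathymetric survey")
plt.ylabel("Flood-line error (m)")
plt.title("Model error vs. age of the bathymetry")
plt.savefig("error_vs_survey_age.png", dpi=150)
```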
L253: Add the date to: 'with smaller distance discrepancies for storms closer to the date of topobathymetry gathering'.
L257: This is where a clearer definition of flooding would help. It's difficult for the reader to visualise what you are describing here. Is it depth or extent of flooding?
Section 3.3: See comments at start of review: you need to better say if the flood impacts did occur during Celia / Gloria. Did severe flooding happen when you predicted it would? Would this tool have averted flooding?
Table 3: What is a flooding percentage? Colour-code the table rows with the alert warning colour. Add a column showing the written alert warning too. At the moment we have to flick back to page 6 to understand it all. Make it easy for the reader to understand this; it's your main result!
Discussion
Reiterate the overall aims of the research at the start of the discussion.
The discussion focuses on the model performance, rather than on the usefulness and application of the tool. Can you infer the wave and water level conditions that would cause a red warning alert? Consider how these results are helpful for a decision maker and what information could be taken directly from this paper and applied to improve preparedness now.
What would you need to do to change this into a forecasting tool? You use hindcast data, so how different would the setup be for forecasting? Would the same CMEMS data be suitable?
Have you considered how higher resolution mapping of intertidal areas, e.g. using a standard marine X-band radar could provide updated morphology (https://www.sciencedirect.com/science/article/abs/pii/S0169555X16306493; https://www.tandfonline.com/doi/full/10.1080/1755876X.2018.1526462)?
XBeach can evolve morphology over a simulation – would it be feasible to update the morphology in the model each day and then use this as input for subsequent runs? It would be challenging to validate without observation, but could be an additional use of the model.
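A rough sketch of such a daily hand-over (assuming the output netCDF holds a bed-level variable zb and that the input bed file uses the same grid and sign convention; all file and variable names are assumptions, not the authors' workflow):

```python
# Minimal sketch: carry the final bed level of today's XBeach run over as the
# bed file for tomorrow's run. Assumes the output netCDF stores bed level in
# "zb" (time, y, x) and that the input bed file expects elevations on the same
# grid (e.g. posdwn = -1 so the values can be written directly).
import netCDF4 as nc
import numpy as np

with nc.Dataset("run_today/xboutput.nc") as ds:
    zb_final = np.asarray(ds.variables["zb"][-1])  # bed level at last output step

np.savetxt("run_tomorrow/bed.dep", zb_final, fmt="%.3f")
```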
The conclusion is fine as it is, but some of the additional analysis, including the mean bias and the definition of red-alert conditions, could be included.
Citation: https://doi.org/10.5194/egusphere-2024-3373-RC1
RC2: 'Comment on egusphere-2024-3373', Anonymous Referee #2, 15 Jan 2025
General comments
The paper is generally well written and clear and describes an interesting local flooding tool.
Specific comments
Section 1
Line 61: Can you elaborate on who are the stakeholders you are aiming this at? E.g. local government agencies, emergency responders, business owners, beach users?
Section 2.2
Line 92: I don’t believe it is available for Windows, but for Linux and macOS I would recommend looking at the cylc workflow engine (https://cylc.github.io/), which is more sophisticated; for example, it would allow the initial server request to be automatically retried if it fails (and the following tasks would then wait until the request succeeded).
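For illustration, a minimal flow.cylc along these lines (Cylc 8; task names and scripts are placeholders, not the authors' actual workflow) would give automatic retries on the download step:

```
# Minimal flow.cylc sketch (Cylc 8). Task names and scripts are placeholders.
[scheduling]
    [[graph]]
        R1 = "fetch_forcing => run_xbeach => issue_alert"
[runtime]
    [[fetch_forcing]]
        script = python fetch_cmems.py
        # retry up to three times, 30 minutes apart, if the request fails
        execution retry delays = 3*PT30M
    [[run_xbeach]]
        script = xbeach params.txt
    [[issue_alert]]
        script = python issue_alert.py
```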
Line 111: What was the reason for picking the IBI dataset over the MEDSEA one? What are the key differences between the two?
Line 116: What is the grid resolution of the CMEMS dataset? How near is the closest point?
Line 133: Is the magnitude/severity of the flooding impact always proportional to the area that is flooded? Could there be events where only a small proportion of cells are flooded but there is still a severe impact on infrastructure in that part of the beach?
Line 143: Prioritising graphical information is a sensible choice. I would also suggest reviewing the format and type of data provided with the stakeholders to ensure that it meets their needs and expectations.
Section 2.3
Line 158: “Typically the hydrodynamic component of the model is not re-validated…” – I am not sure why this line is included, since you do validate the hydrodynamics; I would suggest removing it as it is confusing.
Line 193: What is z? Can you explain a bit what the ULISES codes are/do?
Section 3.1
Have you looked at all at how the model performance changes with forecast lead time?
Section 3.3
Can the % of flooded cells be calculated from the camera to compare to the model prediction? Were the correct colour alerts issued when compared against the observed flooding?
Section 4.1
Line 319: Reference to “significant resolution difference” between the CMEMS and XBeach models – what actually are the resolutions for both?
Section 4.2
Line 351: Is the goal for the end users to run the system themselves on their network, or for it to be run, e.g. by your institute, with them just receiving the output?
Technical comments
Throughout – for a named storm the name should come first, i.e. “storm Celia” and not “Celia storm”.
Section 1
Line 77: “being the median grain size” -> “with median grain size”
Section 2.3
Line 166: “being the signification wave height” -> “with a significant wave height”
Section 4
Line 281: beaches -> beaches’
Citation: https://doi.org/10.5194/egusphere-2024-3373-RC2