the Creative Commons Attribution 4.0 License.
Stochastic properties of coastal flooding events – Part 1: CNN-based semantic segmentation for water detection
Byungho Kang
Rusty A. Feagin
Thomas Huff
Orencio Duran Vinent
Abstract. The frequency and intensity of coastal flooding are expected to accelerate in low-elevation coastal areas due to sea level rise. Coastal flooding due to wave runup affects coastal ecosystems and infrastructure; however, it can be difficult to monitor in remote and vulnerable areas. Here we use a camera-based system to monitor wave runup as part of the after-storm recovery of an eroded beach on the Texas coast. We analyze high-temporal-resolution images of the beach using Convolutional Neural Network (CNN)-based semantic segmentation to study the stochastic properties of runup-driven flooding events. In this first part of the work, we focus on the application of semantic segmentation to identify water and runup events. We train and validate a CNN with over 500 manually classified images, and introduce a post-processing method to reduce false positives. We find that the accuracy of CNN predictions of water pixels is around 90% and strongly depends on the number and diversity of images used for training.
Byungho Kang et al.
Status: open (extended)
RC1: 'Comment on egusphere-2023-231', Anonymous Referee #1, 11 May 2023
The manuscript “Stochastic properties of coastal flooding events – Part 1: CNN-based semantic segmentation for water detection” describes the development and accuracy of a Convolutional Neural Network (CNN) model that automatically detects water on the beach. The authors describe the training methodology and develop a post-processing method based on the classification probability (e.g., water, background, or sky) that improves the accuracy of the model’s predictions. I find the paper to be well-written and concise.
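For readers unfamiliar with probability-based post-processing of semantic segmentation, a minimal sketch of the general idea follows. This is not the authors' actual method; the function name, confidence threshold, and class layout are illustrative assumptions. The idea is to accept a pixel as "water" only when its predicted class probability is high, which suppresses false positives from low-confidence pixels:

```python
import numpy as np

def threshold_water_mask(probs, water_idx=0, min_conf=0.9):
    """Return a boolean water mask from per-pixel class probabilities.

    probs: (H, W, C) array of softmax probabilities per pixel.
    A pixel counts as water only if water is the argmax class AND
    its probability exceeds min_conf; otherwise it is rejected.
    """
    labels = probs.argmax(axis=-1)                 # standard argmax decision
    confident = probs[..., water_idx] >= min_conf  # confidence cutoff
    return (labels == water_idx) & confident

# Toy 2x2 "image" with 3 classes (water, background, sky):
p = np.array([[[0.95, 0.03, 0.02], [0.60, 0.30, 0.10]],
              [[0.20, 0.70, 0.10], [0.91, 0.05, 0.04]]])
mask = threshold_water_mask(p)
# Plain argmax would label three pixels as water; the 0.9 cutoff
# rejects the low-confidence (0.60) pixel as a likely false positive.
```

The trade-off in such a scheme is that raising the threshold reduces false positives at the cost of missing genuinely wet pixels near the waterline, where class probabilities are most ambiguous.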
This is a two-part submission, where Part 2 (also submitted to ESurf) uses the output from the CNN model developed here to evaluate flooding probabilities. I can see why the authors split the paper into two, as I believe aspects of the methodology are novel and there is a detailed validation process that would make one paper very long. However, I believe for this manuscript to be a standalone paper, the authors should contextualize the novelty of the developed algorithm (e.g., is this more automated and potentially faster than other classifying/ML models? Is the post-processing method unique?) and highlight the main contribution of the work to the field. Following on, it is hard to tell if this system was developed solely for validating the probabilistic analysis of Rinaldo et al., 2021 and what is described in Part 2 (Kang et al., in review). If that is the case, perhaps this is better as one paper. However, if the authors plan to use this technique in future applications, a discussion of the applicability/strengths to other sites could also help this manuscript stand alone. Some specific suggestions are found below.
General Comments:
The introduction section is missing material that contextualizes the broad use and long history of using optical remote sensing for coastal-based field work. The way that Lines 27 – 35 reads suggests that camera-based systems are new tools for determining wave runup/shoreline position. Remote sensing wave runup using cameras (i.e., optical remote sensing) has been around for over 20 years now (see Holman and Stanley, 2007) and the authors should at least mention these in their introduction to camera systems for context. Time stacks of wave runup from cameras (e.g., how the Stockdon et al., 2006 R2% wave parameterization was generated) have long been manually digitized or QA/QCed, although there are also detection methods using edge detection algorithms, intensity thresholds, and other filters (https://github.com/Coastal-Imaging-Research-Network/CIRN-Quantitative-Coastal-Imaging-Toolbox). Recently, the US Geological Survey has indeed also moved towards leveraging the strengths of machine learning (Palmsten et al., 2020). I think this information can strengthen the authors’ position that automated methods for classification are important.
Following on, other types of camera systems use geometric corrections and ground control points to measure elevations/water levels/wave runup. Can the authors explain and contextualize the strengths of a system that is only measuring the percent of beach flooded? Is this tool intended to be for long-term monitoring/deployment? Are the authors planning on using it for beach-flood validation in other locations?
Terminology: The authors motivate their work discussing “flooding due to wave runup” and understanding the properties of “runup-driven wave events.” I just want to make the point that the water levels the authors capture do indeed have runup in them, but they’re total water levels (runup + still water level) as the authors are not removing the still water level signal, so the flooding is generated by other components related to the still water level (e.g., storm surge, sea level anomaly) not just the wave runup.
Is the intent of this system to only look at low level events? I find the use of the terminology “coastal flooding” (in the title) a little broad and misleading, as it seems this is really looking at “beach flooding.”
Figure 9 displays a decreasing trend in the mean absolute percentage error (MAPE) for different flooding-% images and numbers of training images. If the error keeps decreasing, why not see where it “bottoms out”/becomes stable, i.e., where an increase in the number of training images or the ratio of flooded images in the training set no longer affects the MAPE? Maybe the authors also want to include a plot here that shows the increase in computation time relative to variation in the number of training images to show why a certain value is chosen, rather than the maximum decrease in MAPE.
How computationally efficient is this entire effort?
Specific Lines:
Line 17 – 19: I am confused by this citation, as the Kang et al. paper’s (Part 2 of this work) focus is not this.
Figures 5 and 8 – open circles with closed circles plotted on top of them make the open circles hard to see.
Label Fig 8 and 9 with a,b,c; a,b, respectively instead of using “left” or “right.”
I didn’t see any data availability statement or a location where codes could be found – please see the journal’s Data Policy.
References:
Holman, R. A., & Stanley, J. (2007). The history and technical capabilities of Argus. Coastal engineering, 54(6-7), 477-491.
Palmsten, M., Birchler, J. J., Brown, J. A., Harrison, S. R., Sherwood, C. R., & Erikson, L. H. (2020, December). Next-generation tool for digitizing wave runup. In AGU Fall Meeting Abstracts (Vol. 2020, pp. EP051-10). https://ui.adsabs.harvard.edu/abs/2020AGUFMEP051..10P/abstract
Citation: https://doi.org/10.5194/egusphere-2023-231-RC1
Viewed
- HTML: 188
- PDF: 51
- XML: 8
- Total: 247
- BibTeX: 4
- EndNote: 3