This work is distributed under the Creative Commons Attribution 4.0 License.
AI Image-based method for robust automatic real-time water level monitoring: A long-term application case
Abstract. The study presents a robust, automated camera gauge for long-term river water level monitoring operating in near real-time. The system employs artificial intelligence (AI) for the image-based segmentation of water bodies and the identification of ground control points (GCPs), combined with photogrammetric techniques, to determine water levels from surveillance camera data acquired every 15 minutes. The method was tested at four locations over a period of more than 2.5 years, during which over 219,000 images were processed. The results demonstrate a high degree of accuracy, with mean absolute errors ranging from 1.0 to 2.3 cm compared to official gauge references. The camera gauge is resilient to adverse weather and lighting conditions, achieving an image utilisation rate above 95 % throughout the entire period, and the integration of infrared illumination enables 24/7 monitoring. Key factors influencing accuracy were identified as camera calibration, GCP stability, and vegetation changes. The low-cost, non-invasive approach advances hydrological monitoring capabilities, particularly for flood detection and mitigation in ungauged or remote areas, and strengthens image-based techniques for robust, long-term environmental monitoring with frequent, near real-time updates.
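The abstract outlines a chain of AI water segmentation, GCP detection and photogrammetric resection that turns each camera frame into a water level. As a rough illustration of how such a chain can be wired together (not the authors' implementation: the file name, camera intrinsics, GCP coordinates, the brightness threshold standing in for the AI segmentation, and the assumption that the waterline lies on a vertical wall plane are all placeholders), a minimal Python/OpenCV sketch could look like this:

```python
"""Illustrative sketch only - not the implementation from the paper.

Assumptions (all hypothetical): a surveillance frame 'camera_frame.jpg',
calibrated intrinsics, GCP markers on a vertical quay wall spanning the
world plane y = 0, world z pointing up (elevation in metres), and a simple
brightness threshold standing in for the paper's AI water segmentation.
"""
import cv2
import numpy as np

image = cv2.imread("camera_frame.jpg")
if image is None:
    raise SystemExit("provide a camera frame")

K = np.array([[1500.0, 0.0, 960.0],      # assumed pinhole intrinsics
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                        # assumed: negligible lens distortion

# GCPs on the wall plane y = 0: world coordinates (m) and pixel positions.
gcp_world = np.array([[0.0, 0.0, 2.0], [5.0, 0.0, 2.1],
                      [5.0, 0.0, 4.0], [0.0, 0.0, 4.1]])
gcp_pixel = np.array([[210.0, 820.0], [1650.0, 830.0],
                      [1600.0, 260.0], [240.0, 250.0]])

# 1. Photogrammetric resection: camera pose from the GCP correspondences.
ok, rvec, tvec = cv2.solvePnP(gcp_world, gcp_pixel, K, dist)
if not ok:
    raise SystemExit("resection failed")
R, _ = cv2.Rodrigues(rvec)
cam_center = -R.T @ tvec.ravel()          # camera position in world coordinates

# 2. Placeholder "segmentation": dark pixels stand in for the AI water mask.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
water_mask = gray < 60

# 3. Waterline = topmost water pixel in every image column containing water.
cols = np.where(water_mask.any(axis=0))[0]
rows = np.array([np.argmax(water_mask[:, c]) for c in cols])

# 4. Intersect each waterline viewing ray with the wall plane y = 0;
#    the z coordinate of the intersection is the water surface elevation.
K_inv = np.linalg.inv(K)
levels = []
for u, v in zip(cols, rows):
    ray_world = R.T @ (K_inv @ np.array([u, v, 1.0]))
    if abs(ray_world[1]) < 1e-9:
        continue
    s = -cam_center[1] / ray_world[1]
    if s > 0:
        levels.append((cam_center + s * ray_world)[2])

print(f"Estimated water level: {np.median(levels):.2f} m")
```

In the manuscript, the threshold step is replaced by the trained segmentation network and the GCPs are re-measured automatically in every image, which is what this sketch glosses over.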
Status: open (until 08 May 2025)
RC1: 'Comment on egusphere-2025-724', Salvatore Manfreda, 18 Apr 2025
Brief Summary
This work presents an interesting and complex model for automatically estimating river water levels using AI. The method was tested over a period of more than two years, during both day and night, and the results appear to be reliable across various environmental and hydrological conditions.
General Evaluation
The manuscript is well-structured and addresses the important field of hydrological monitoring through image-based systems. It demonstrates some improvements over previous methods, particularly in terms of precision and long-term reliability of water level estimation.
I have only a few minor comments, questions, and suggestions aimed at improving clarity and readability.
As a general suggestion, I recommend adding an Appendix at the end of the manuscript—before the References section—with a description of all acronyms, indices, units, and other technical terms that are used but not explained in the text. This would be very helpful for readers.
Specific Comments
- Page 4, Line 125: Move Figure 1 to the appropriate section “2.1 Study Area”.
- Page 6, Line 150: What does “HWIMS” mean in Table 1?
- Page 7, Line 186: In Figure 3, there are some unexplained acronyms (RIWA, KIWA?). Please include these in the proposed Appendix. Additionally, consider moving this figure to the beginning of the Methodology section, as it represents a flowchart of the entire procedure.
- Page 8, Line 192: In Figure 4, clarify the meaning of UperNet and ResNeXt50. Also, move this figure to the relevant section 2.4 “AI Segmentation”.
- Page 9, Line 238: I noticed very high segmentation accuracy at night using IR. My question is: since the Sun also emits IR radiation, which is almost completely absorbed by water, would it be possible to simplify the procedure by using only IR intensities for water segmentation instead of training an AI model (a minimal thresholding sketch of this alternative is given after these comments)? If not, please highlight the advantages of your approach.
- Page 9, Line 246: You mention that “the GCPs need to be measured in each image every time.” Please explain more clearly the advantage of using GCPs instead of directly reading water levels from measuring tapes.
- Page 12, Line 332: MAE is a well-known metric (its standard definition is restated after these comments for reference); consider removing the formula or, if you decide to include it, formatting it properly rather than placing it within the text.
- Page 12, Line 341: Move Table 3 closer to its first mention on page 11.
- Page 12, Line 347: In Table 4, explain all abbreviations used, or include them in the Appendix.
- Page 13, Line 355: In Figure 5, clarify what the color scales indicate—are they related to segmentation, water levels, or something else?
- Page 14, Line 379: Move Figure 6 to appear before the Discussion section, as it presents results.
- Page 15, Line 391: You state: “For future installations with different cameras and environments, we expect that only a small number of manually labelled images will be needed to adapt the model to the new site.” I noticed that the selected river section features relatively uniform riverbed and bank characteristics. Is it accurate to say that a full retraining would not be needed even for sites with more diverse riverine environments? If so, please elaborate on this in the Discussion.
In this context, my final reflection is that while the model is very comprehensive, it involves numerous steps of image processing. Could this be overly complex for the sole purpose of water level estimation? You might consider expanding on this point, perhaps by highlighting other potential uses of your image-based 3D model in hydrological monitoring.
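Regarding the comment on Page 9, Line 238: the intensity-only alternative would essentially reduce to a global threshold on the (near-)IR channel, since water absorbs near-IR and appears dark. A minimal sketch of that baseline, assuming a single-channel IR frame with a hypothetical file name (this is the simpler alternative being asked about, not the authors' AI segmentation):

```python
import cv2

# Hypothetical single-channel frame captured under IR illumination.
ir = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)
if ir is None:
    raise SystemExit("provide an IR frame")

# Water absorbs near-IR strongly and appears dark, so Otsu's global threshold
# can separate the dark (water) pixels from brighter bank and vegetation pixels.
_, water_mask = cv2.threshold(ir, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Simple clean-up: remove small speckles before extracting the waterline.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
water_mask = cv2.morphologyEx(water_mask, cv2.MORPH_OPEN, kernel)
```

Such a baseline would presumably also flag other dark, low-reflectance surfaces (shadows, wet banks, asphalt), which is where a trained model could justify its added complexity.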
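For reference, the MAE mentioned for Page 12, Line 332 is the standard metric

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left| h_i^{\mathrm{cam}} - h_i^{\mathrm{ref}} \right|,$$

where $h_i^{\mathrm{cam}}$ and $h_i^{\mathrm{ref}}$ are the camera-derived and reference water levels of observation $i$, and $N$ is the number of observations (the symbols are illustrative and not necessarily the manuscript's notation).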
Final Recommendation
Based on the overall quality and depth of the manuscript, I recommend Minor Revision.
Citation: https://doi.org/10.5194/egusphere-2025-724-RC1
Video supplement
Water level results at Lauenstein gauge station (KIWA Project), Xabier Blanch et al., https://doi.org/10.5281/zenodo.14875801
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 162 | 49 | 7 | 218 | 6 | 4 |