This work is distributed under the Creative Commons Attribution 4.0 License.
AI Image-based method for a robust automatic real-time water level monitoring: A long-term application case
Abstract. The study presents a robust, automated camera gauge for long-term river water level monitoring operating in near real-time. The system employs artificial intelligence (AI) for the image-based segmentation of water bodies and the identification of ground control points (GCPs), combined with photogrammetric techniques, to determine water levels from surveillance camera data acquired every 15 minutes. The method was tested at four locations over a period of more than 2.5 years. During this period over 219,000 images were processed. The results demonstrate a high degree of accuracy, with mean absolute errors ranging from 1.0 to 2.3 cm in comparison to official gauge references. The camera gauge demonstrates resilience to adverse weather and lighting conditions, achieving an image utilisation rate of above 95 % throughout the entire period. The integration of infrared illumination enabled 24/7 monitoring capabilities. Key factors influencing accuracy were identified as camera calibration, GCP stability, and vegetation changes. The low-cost, non-invasive approach advances hydrological monitoring capabilities, particularly for flood detection and mitigation in ungauged or remote areas, enhancing image-based techniques for robust, long-term environmental monitoring with frequent, near real-time updates.
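Conceptually, each 15-minute acquisition cycle of the described system reduces to the following sketch (an illustrative outline only; all four callables are placeholders standing in for the trained segmentation and GCP-detection models and the photogrammetric level computation, not the authors' implementation):

```python
from typing import Callable, Optional

def camera_gauge_cycle(fetch_image: Callable,
                       segment_water: Callable,
                       detect_gcps: Callable,
                       compute_level: Callable,
                       min_gcps: int = 4) -> Optional[float]:
    """One 15-minute acquisition cycle of the camera gauge.

    All four callables are placeholders:
      fetch_image()        -> current surveillance frame (day or IR night image)
      segment_water(img)   -> binary water mask from the AI segmentation model
      detect_gcps(img)     -> image coordinates of the visible GCPs
      compute_level(m, g)  -> photogrammetric water level in metres
    """
    image = fetch_image()
    mask = segment_water(image)        # AI water-body segmentation
    gcps = detect_gcps(image)          # AI GCP identification
    if len(gcps) < min_gcps:           # too few stable GCPs: discard this image
        return None
    return compute_level(mask, gcps)   # near real-time water level
```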
Status: final response (author comments only)
RC1: 'Comment on egusphere-2025-724', Salvatore Manfreda, 18 Apr 2025
Brief Summary
This work presents an interesting and complex model for automatically estimating river water levels using AI. The method was tested over a period of more than two years, during both day and night, and the results appear to be reliable across various environmental and hydrological conditions.
General Evaluation
The manuscript is well-structured and addresses the important field of hydrological monitoring through image-based systems. It demonstrates some improvements over previous methods, particularly in terms of precision and long-term reliability of water level estimation.
I have only a few minor comments, questions, and suggestions aimed at improving clarity and readability.
As a general suggestion, I recommend adding an Appendix at the end of the manuscript—before the References section—with a description of all acronyms, indices, units, and other technical terms that are used but not explained in the text. This would be very helpful for readers.
Specific Comments
- Page 4, Line 125: Move Figure 1 to the appropriate section “2.1 Study Area”.
- Page 6, Line 150: What does “HWIMS” mean in Table 1?
- Page 7, Line 186: In Figure 3, there are some unexplained acronyms (RIWA, KIWA?). Please include these in the proposed Appendix. Additionally, consider moving this figure to the beginning of the Methodology section, as it represents a flowchart of the entire procedure.
- Page 8, Line 192: In Figure 4, clarify the meaning of UperNet and ResNeXt50. Also, move this figure to the relevant section 2.4 “AI Segmentation”.
- Page 9, Line 238: I noticed very high segmentation accuracy at night using IR. My question is: since the Sun also emits IR radiation, which is almost completely absorbed by water, would it be possible to simplify the procedure by using only IR intensities for water segmentation instead of training an AI model? If not, please highlight the advantages of your approach.
- Page 9, Line 246: You mention that “the GCPs need to be measured in each image every time.” Please explain more clearly the advantage of using GCPs instead of directly reading water levels from measuring tapes.
- Page 12, Line 332: MAE is a well-known metric; consider removing the formula or, if you decide to include it, format it properly rather than placing it within the text (for reference, the standard form is typeset after this list).
- Page 12, Line 341: Move Table 3 closer to its first mention on page 11.
- Page 12, Line 347: In Table 4, explain all abbreviations used, or include them in the Appendix.
- Page 13, Line 355: In Figure 5, clarify what the color scales indicate—are they related to segmentation, water levels, or something else?
- Page 14, Line 379: Move Figure 6 to appear before the Discussion section, as it presents results.
- Page 15, Line 391: You state: “For future installations with different cameras and environments, we expect that only a small number of manually labelled images will be needed to adapt the model to the new site.” I noticed that the selected river section features relatively uniform riverbed and bank characteristics. Is it accurate to say that a full retraining would not be needed even for sites with more diverse riverine environments? If so, please elaborate on this in the Discussion.
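For reference, the standard definition, properly typeset, would be

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{h}_i - h_i\right|,$$

where, in this application, $\hat{h}_i$ would be the camera-derived water level and $h_i$ the reference gauge reading for image $i$.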
In this context, my final reflection is that while the model is very comprehensive, it involves numerous steps of image processing. Could this be overly complex for the sole purpose of water level estimation? You might consider expanding on this point, perhaps by highlighting other potential uses of your image-based 3D model in hydrological monitoring.
Final Recommendation
Based on the overall quality and depth of the manuscript, I recommend Minor Revision.
Citation: https://doi.org/10.5194/egusphere-2025-724-RC1
AC1: 'Reply on RC1', Xabier Blanch Gorriz, 14 Jul 2025
Dear Salvatore,
We would like to sincerely thank you for your thorough and constructive evaluation of our manuscript. We appreciate your positive feedback on our work and your valuable suggestions for improving its clarity.
Following your general suggestion, we will add an Appendix to the revised manuscript defining all acronyms, indices, units, and technical terms to enhance readability. We will also address all of your specific comments, and the corresponding changes will be incorporated into the new version of the manuscript.
While the revised manuscript will incorporate all of these changes, we would like to take this opportunity to provide extended responses to the three specific points you raised for further discussion below.
- Regarding the advantage of using GCPs instead of directly reading water levels from measuring tapes (Page 9, Line 246).
We appreciate this insightful question. From the outset, our project’s aim was to develop a truly non-invasive, contactless method for river monitoring. We believe that our GCP-based approach offers several key advantages over the use of measuring tapes, which will be described more explicitly and concisely in the revised manuscript, following these main points (a short sketch of the underlying pose-estimation step is given after the list):
- Installation flexibility: Measuring tapes require installation in specific locations that are compatible with both low and high water levels, and must be in direct contact with the water. This significantly limits the possible deployment sites. In contrast, GCPs are placed outside the water and allow for greater flexibility, as georeferencing can be achieved from almost any stable feature within the camera’s field of view.
- Long-term reliability: Field equipment such as measuring tapes or even GCPs can degrade over time due to environmental factors like vegetation growth, freeze up, debris, sedimentation, or vandalism. In our multi-year study, some GCPs became partially covered or deteriorated, but the system continued to function optimally as long as a sufficient number of GCPs remained visible. In contrast, if a measuring tape becomes obscured or lost, water level readings are no longer possible.
- Methodological scalability: While measuring tapes only support water level estimation, GCP-stabilized imagery enables advanced hydrological measurements beyond water levels (e.g., surface flow velocity quantification).
- Towards a non-invasive development: Importantly, our methodology is not inherently dependent on the use of physical GCPs. In future implementations, we envision leveraging virtual GCPs (vGCPs)—that is, using fixed features already present in the scene with known 3D coordinates. This would eliminate the need for any physical installation (neither GCPs nor measuring tapes), further reducing the environmental impact and making the approach even more non-invasive.
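To make the role of the GCPs concrete, the georeferencing step essentially amounts to a space resection, sketched below under stated assumptions (OpenCV, a calibrated camera, and GCPs with known 3D coordinates; an illustration, not our production code):

```python
import numpy as np
import cv2

def camera_pose_from_gcps(gcp_world: np.ndarray, gcp_image: np.ndarray,
                          K: np.ndarray, dist: np.ndarray):
    """Estimate the camera pose from >= 4 GCP correspondences (space resection).

    gcp_world: (N, 3) GCP coordinates in the georeferenced 3D model
    gcp_image: (N, 2) detected image coordinates of the same GCPs
    K:         (3, 3) intrinsic matrix from camera calibration
    dist:      lens distortion coefficients
    """
    ok, rvec, tvec = cv2.solvePnP(gcp_world.astype(np.float32),
                                  gcp_image.astype(np.float32), K, dist)
    if not ok:
        raise RuntimeError("Pose estimation failed; too few or unstable GCPs")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    C = -R.T @ tvec              # camera centre in world coordinates
    return R, tvec, C
```

Because the pose can be re-estimated from any subset of at least four visible points, the system keeps working even when individual GCPs become covered or deteriorate, which is exactly the long-term reliability advantage noted above.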
- Concerning the generalization of our model to sites with more diverse riverine environments and whether full retraining would be needed (Page 15, Line 391).
Thank you for raising this point. We realize this aspect may not have been fully explained in the original manuscript, so we will make it clearer in the revised version. For training, we used the RIWA dataset, which contains a wide range of images with water in many different contexts—not just rivers (1163 images). This means that the model was already exposed to a lot of variety during training. In fact, only a small part of the training images came from the rivers analysed in our study (145 images, covering 4 study sites), and even with just a few of these, the model was able to segment river water correctly.
Of course, we can't guarantee exactly how the model will perform in every possible new river environment, but based on our results, we are confident that it generalizes well and can be adapted to new river sites by adding only a few well-labelled images to the dataset. Although we never claim full transferability to new study areas, we appreciate the reviewer’s remark and will revisit the use of this term in the revised version.
- On the potential complexity of our approach for water level estimation and the possible additional uses of our image-based 3D model in hydrological monitoring.
Thank you for this observation. Although this aspect is not addressed in detail in the current article, it is true that image-based systems for 3D river monitoring have a broader scope within hydrological monitoring. The images and video sequences captured by the cameras are also used to estimate surface flow velocity and, subsequently, river discharge, allowing for hydrological monitoring without any direct contact with the river and by only one continuously operating sensor system.
However, these advanced measurements rely on having a robust photogrammetric setup and accurate water level estimation. For this reason, the focus of this article is specifically on these two foundational aspects. Since this point has also been raised by other reviewers, we will include a paragraph in the Introduction to clarify how our approach fits into the broader objectives of the KIWA project (Grundmann et al. 2024) and its potential for future applications in hydrological monitoring.
Once again, I would like to personally thank you for your thoughtful comments, which will undoubtedly help make the revised manuscript clearer and more accessible to a broader audience. I hope that our detailed responses have addressed your concerns and serve to better demonstrate the scope and significance of the work we have carried out.
Citation: https://doi.org/10.5194/egusphere-2025-724-AC1
RC2: 'Comment on egusphere-2025-724', Anonymous Referee #2, 11 May 2025
Brief Summary:
This study presents a complex methodology for automatically measuring river water levels. The authors collected a large number of images over a period of more than two years to evaluate model performance. While the results seem good, this manuscript reads like a technical report rather than a research paper. The innovation of this study is not clear. There is also a lack of comparison with current measurement methods and with deep learning models from previous studies in the literature. Based on the overall quality, the innovation, and the completeness of the experiments, I do not recommend publishing this manuscript in HESS, which is a high-level journal in hydrology and water management.
Major comments:
- The innovation of this study is not clear. You propose a method for more accurate/robust water level measurement, right (Line 115)? But there is no comparison with current measurement methods or with deep learning models from the literature. In the introduction section, I found some state-of-the-art techniques and models mentioned (e.g., the SAM model).
- There is a lack of reporting and analysis of accuracy across different scenarios (different weather, transparent water, vegetation cover, etc.). This could be done by dividing the test images into several scenario subsets and then testing the model on each to evaluate generalization across scenarios (a minimal sketch of such an evaluation is given after these comments). Without this analysis, it is hard to claim the method is robust.
- While very many images (200k) are used to evaluate the models, their generalization to an unseen location is not explored or discussed. That is important for real operation, e.g., applying this model to a different river or a new country with limited labelled data and few real water level measurements. This could be done by training models on one location and testing generalization to the other locations. Otherwise, it is hard to claim the method is robust.
- This is quite a complex method for automatically estimating river water levels. For example, one needs to build a 3D model using drone imagery (Section 2.2) and measure the GCPs in each image every time (Line 246). That requires considerable expert knowledge, human labour, and specific equipment. What is the advantage of the method? It is hard to call it “low-cost” in the abstract and conclusion. As with my first major comment, you need to estimate the cost of different methods in the literature. How about using a scale bar combined with a computer vision system? While it requires intervention in the river, it is low-cost and shows the water level directly without post-processing.
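As a minimal sketch of the scenario-wise evaluation suggested in my second comment (assuming each test image carries a scenario tag and a per-image absolute error; all names are illustrative):

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def mae_by_scenario(records: Iterable[Tuple[str, float]]) -> Dict[str, float]:
    """Mean absolute error per scenario subset.

    records: (scenario, abs_error_cm) pairs, e.g. ("night_ir", 1.4),
             ("vegetation", 2.9), ("snow", 2.1), ("clear_day", 0.8), ...
    """
    sums: Dict[str, float] = defaultdict(float)
    counts: Dict[str, int] = defaultdict(int)
    for scenario, err in records:
        sums[scenario] += err
        counts[scenario] += 1
    return {s: sums[s] / counts[s] for s in sums}
```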
Minor comments:
- An overall introduction of the methodology is lacking before Section 2.2. Please present the procedure of the overall methodology and its link to each method or section (i.e., 2.2-2.5); this can be tied to Fig. 2. By the way, you have two sections numbered 2.5.
- Line 205: the names of the 32 CNN architectures are missing. Consider reporting them in an Appendix.
- Line 225: what data augmentation techniques do you use? Provide details.
- Line 225: what is the size of the images used for training and testing? Raw images, 512×512 patches, or both? It is unclear. Why not perform data augmentation on the raw images?
- I suggest labelling the subfigures (a), (b), etc. in Fig. 2 and referencing the specific subfigure in the text rather than Fig. 2 as a whole, for better readability (e.g., in Line 245).
- Line 225: the model specifically trained on KIWA images is not introduced above. Is it the model trained on night images alone? Readers lack information on this model: how was it trained, with how many training images, and with what architecture? These details are not mentioned before.
- An overall description of Fig. 3 is lacking in the text.
- Line 30: before stating the advantages of image-based techniques, I suggest summarizing the state of the art of non-image-based techniques, their applications, and their disadvantages.
- Line 35: provide several examples of these applications rather than just stating "water management", which is a very broad scope.
- Line 45: what is hydrological monitoring? Provide some examples.
- Line 55: what are the traditional methods? Please clarify.
- Line 55: cameras also need continuous maintenance, such as changing the battery or SD memory card.
- Line 55: you provide some citations. Do they use cameras and AI? Please clarify what "automatic measurement" means here.
- Line 55: how large is the effect of this intervention (i.e., the scale bar)? It could be a small stick installed near the bank, with little effect on the river, right? Please clarify.
- Line 115: you state that "their accuracy limitations make them unsuitable for reliable water level monitoring". What is the detailed accuracy of the previous studies? Show metrics. And what level of accuracy is suitable?
Citation: https://doi.org/10.5194/egusphere-2025-724-RC2
AC2: 'Reply on RC2', Xabier Blanch Gorriz, 14 Jul 2025
Dear Reviewer #2,
We would like to sincerely thank Reviewer #2 for the time and effort dedicated to reviewing our manuscript. We regret that you consider the article unsuitable for publication in HESS. However, we are confident that our work is innovative and represents a significant advancement in automatic water level monitoring. As we detail below, this work is a foundational step towards comprehensive, non-intrusive, image-based river monitoring (including water level, flow, and discharge), and is therefore worthy of publication in HESS. In the revised manuscript, we will make an effort to highlight this innovation more explicitly.
To our knowledge, existing literature on optical water level monitoring typically relies on hybrid approaches that combine image data with additional instrumentation such as LiDAR or radar, or on direct visual readings of measuring tapes captured in the images. Moreover, most of these studies are constrained to short time periods and highly controlled environments, where channel geometry and external conditions are well known in advance.
While we respect your critical perspective, we believe some of your requests, such as a comprehensive state-of-the-art review of non-image-based methods or a detailed cost comparison, fall outside the scope of a research article focused on a specific methodological contribution. While these are certainly valuable topics, they are better suited to a dedicated review paper. Additionally, we do not share the view that our manuscript resembles a technical report. Although it necessarily contains highly technical sections due to the novel technology involved (AI and photogrammetry), we maintain that the paper demonstrates substantial scientific analysis and depth.
The vast majority of your minor comments will also be addressed in the revised manuscript. For specific requests, such as listing the 32 CNN architectures, we believe that the source is clearly cited, and reproducing material from another published work would not be appropriate. Below, we provide detailed responses to your major comments, hoping that our responses and revisions will clarify the value of our contribution.
Comment #1
The core innovation of our work is the development and long-term testing of a fully non-invasive (contactless) optical monitoring system over more than two years in real river environments. This represents an important first step towards further advancements that can be achieved with image-based methods in hydrological monitoring. Unlike most previous studies, which often rely on controlled settings or require equipment placed directly in the water, our approach achieves high accuracy without any in-stream devices and has been validated across several real-world sites, not just in laboratory conditions or controlled water bodies. We honestly consider this sufficiently relevant for publication in HESS.
We deliberately benchmarked our system against co-located, in-situ physical sensors, as this provides the most direct and meaningful validation of real-world accuracy. Comparing our results to other optical systems from the literature can be misleading, as performance is highly dependent on site-specific factors (e.g., camera resolution, lens type, distance). Our continuous, multi-year comparison against a physical reference offers a far more robust characterization of reliability. The water levels at the reference gauges are recorded using a float-type recorder and a pressure level sensor; this redundant monitoring system is operated, maintained, and quality-controlled by the national monitoring agency. In our opinion, it is the best reference available, and it is independent of our work. We will add this information to the manuscript.
Finally, our work focuses on integrating established deep learning models into a robust hydrological pipeline. We used validated models for specific sub-tasks: water segmentation (Wagner et al., 2023) and GCP detection (Blanch et al., 2025). These cited papers already provide detailed discussions of alternative architectures (including models like SAM) and the specific AI procedures used. Therefore, we do not believe it is necessary to reproduce that already-published content in this manuscript.
Comment #2
We thank the reviewer for this constructive suggestion. We agree that analyzing performance across different scenarios is essential for demonstrating robustness. While the reviewer suggests accomplishing this by creating explicit data subsets (breaking the long time series analysis), we argue that our study achieves this through a more holistic and, we believe, more realistic approach: analyzing the model's performance over a continuous, two-and-a-half-year time series.
This long-term evaluation inherently captures the very scenarios of concern, from seasonal vegetation growth to snow cover and low-flow conditions, and the manuscript already discusses their impact on accuracy. We maintain that this continuous analysis provides a more stringent and honest test of real-world operational reliability than evaluating isolated, pre-selected subsets. The seasonality of these challenges (e.g., vegetation in summer) acts as a natural data partitioning, and by publishing the daily data, we allow for a transparent assessment of how precision varies under these naturally occurring conditions. This is why we decided to specifically plot the variation of the absolute errors in Figure 6.
By reporting the mean absolute errors over the entire period (both for the first year and for the full study duration), we provide the actual performance that would be expected in a real-world application. Artificially partitioning the dataset could, under optimal conditions, have led to reporting lower absolute errors than those published in the paper, but we honestly believe that this would have created unrealistic expectations and would not have reflected the true overall performance achievable by the method. Moreover, we would not have been able to detect changes in accuracy over time.
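To illustrate how this natural partitioning can be recovered from the published daily data, a minimal sketch (assuming a pandas series of daily absolute errors indexed by date; names are hypothetical, not our analysis code) would be:

```python
import pandas as pd

def seasonal_mae(daily_abs_err: pd.Series) -> pd.Series:
    """Mean absolute error per meteorological season.

    daily_abs_err: absolute error (cm) with a DatetimeIndex. Grouping by
    season recovers scenario subsets (e.g., summer vegetation, winter snow
    and ice) without breaking the continuous time series analysis.
    """
    season_idx = daily_abs_err.index.month % 12 // 3  # 0=DJF,1=MAM,2=JJA,3=SON
    labels = {0: "DJF", 1: "MAM", 2: "JJA", 3: "SON"}
    return daily_abs_err.groupby(season_idx.map(labels.get)).mean()
```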
Comment #3
The authors would like to clarify that, in our view, “robustness” refers to the reliability and consistency of the method once it is installed at a given site, rather than to its direct transferability from one location to another. While transferability is certainly an interesting and relevant topic—one which we mention and briefly discuss in line 390 of the manuscript—we do not claim that the method can be deployed at new sites without any retraining.
What we do highlight is that the model achieved strong performance in each study area using only a small proportion of locally annotated images (about 11% of the total dataset, distributed across the four sites). Given the wide range of possible water bodies, installation types, environments, camera perspectives, and focal lengths, we consider it scientifically limited to present a transferability analysis based on just a few locations—a similar point is also addressed in our response to Reviewer 1, comment 2. In our view, it is much more transparent and scientifically honest to clearly state how many site-specific images were actually needed to achieve a high-performing AI model, rather than implying universal transferability without evidence.
We agree that it would be very interesting to analyze the transferability of this part of the process in detail, but such a specific analysis would require testing the method across ten, twenty, or even more different river types and camera combinations, which is clearly, once again, beyond the scope of this article.
Comment #4
We appreciate this comment and fully acknowledge the need to improve the motivation and context in the introduction of the manuscript. We also thank the reviewer for recognizing the level of expert knowledge required for this approach. As already discussed in our responses to Reviewer #1 (comments 1 and 3), the use of non-invasive, contactless optical methods for water level estimation is part of a broader research project (KIWA project, cited in the paper (Grundmann et al. 2024)). Once a robust and precise photogrammetric setup is established—as detailed in this article—the same camera system can also be used to record video clips, which enables the estimation of river flow velocity and, ultimately, discharge. These developments will be addressed in future publications, completing a comprehensive set of non-invasive, image-based tools for river monitoring.
The benefits and rationale for using contactless methods, as opposed to in-stream equipment such as scale bars, have already been detailed in our previous responses (Reviewer #1). In summary, our approach is designed not only for water level estimation but also as a foundation for more advanced hydrological monitoring, all while minimizing environmental impact and the need for physical intervention in the river. Finally, while we recognize that a certain level of technical development is required, in scientific research this should never be considered a limitation if it leads to benefits or significant advances in monitoring capabilities, without in any way diminishing the value of other methods.
Citation: https://doi.org/10.5194/egusphere-2025-724-AC2
RC3: 'Comment on egusphere-2025-724', Riccardo Taormina, 30 May 2025
This paper presents a technically solid framework for long-term, image-based water level monitoring, combining AI-driven segmentation with photogrammetric reconstruction. The system achieves strong accuracy and contributes meaningfully to camera-based hydrological observation. However, the paper has several critical issues.
It often reads more like a technical report than a scientific article, with an overly descriptive structure and limited analytical framing. Despite claims of transferability, the approach relies heavily on infrastructure-intensive steps. Moreover, the paper’s objective is somewhat inconsistently framed: it alternates between aiming for sub-centimeter accuracy and claiming flood detection as the primary goal—a task that could likely be achieved with a simpler system. Finally, the comparison with prior work and alternative techniques lacks some rigor, and the evaluation metrics would benefit from clearer contextualization.
MAIN ISSUES
1. Scientific Article vs. Scientific Report
While the technical work is solid, the manuscript often reads more like a project report than a scientific article. It lacks a focused research question, has an overly descriptive tone, and does not critically reflect on methodological generalizability, especially beyond the AI component.
2. Transferability is Overstated
The authors claim high model transferability, but the full system depends on site-specific 3D modeling, GCP placement, camera calibration, and environmental maintenance (e.g., vegetation control). The workflow’s performance degrades over time without intervention. The authors should distinguish between model-level transferability (e.g., CNN reuse) and system-level replicability, which is far more constrained.
3. Ambiguity of Goal
Around line 455, the authors state that the system’s primary goal is flood detection. However, the efforts seem instead directed at achieving very high accuracy (i.e., ≤2 cm, in line with German standards). I gather these are distinct goals with different technical demands. If the intended application is flood detection, simpler models could likely achieve acceptable performance without the need for full 3D reconstruction.
4. Comparisons
The comparisons made to prior work are not entirely fair or consistent. Don't the datasets differ in river conditions, locations, and environmental conditions? Moreover, some of the simpler methods perform comparably in terms of key metrics. If the actual goal is flood detection, why shouldn’t we use these methods instead? Furthermore, given the costly efforts in maintenance, how does this method compare against non-computer-vision approaches? What are the real costs/benefits of deploying and maintaining this system vs. alternatives?
5. Evaluation Metrics and Goals
The paper reports absolute errors and Spearman correlations but lacks relative error analysis or discussion of operational thresholds. Again, if flood detection is the goal, is it really necessary to optimize for every centimeter? Perhaps the authors should provide more context about what performance is “good enough” for different use cases?
MINOR COMMENTS
> Several figures (e.g., Figures 5, 6, and 8) are difficult to interpret due to visual clutter or low resolution. Figure 5 tries to convey too many sites at once; Figure 6 requires zooming to read. Consider simplifying visuals, focusing on a single site per panel, or moving supporting visuals to an appendix.
> Terms like “KIWA variables” are not broadly meaningful outside the context of this specific project. Using more general terminology would improve clarity and support reproducibility in other settings.
> The paper states that code and data are only available upon request. Given that the authors build on standard architectures (UPerNet, R-CNN) and claim reproducibility, it is unclear why the full codebase is not released. Sharing trained models and AI pipelines would improve transparency. Is this linked to commercial efforts?
> Line 105 – The hyperlink for Wagner incorrectly includes the word “IN” as part of the link. This should be corrected for proper formatting and readability.
> Line 121 – The discussion of Ground Control Point (GCP) detection around this line appears without sufficient context.
> The introduction is long and could be trimmed. The contributions should be clearly stated toward the end of the introduction, rather than dispersed throughout. Also, I do not see sufficient framing with respect to methods other than computer vision.
> The section currently labeled Methodology also covers the study area and data. Consider renaming it to Methods and Materials to better reflect the content. Also, this section is the hardest to read and gives the feeling we are reading a technical report, not a scientific paper.
Citation: https://doi.org/10.5194/egusphere-2025-724-RC3
AC3: 'Reply on RC3', Xabier Blanch Gorriz, 14 Jul 2025
Dear Riccardo,
We sincerely thank you for your thorough and detailed evaluation of our manuscript. We appreciate the constructive feedback and we will address all minor comments in the revised version, including improvements to figures, terminology, and manuscript structure.
However, we would like to clarify that we are not comfortable with the implication of potential “commercial efforts” regarding code availability. The code is publicly accessible upon request and is already being used in various projects and by other groups. The author team will further discuss whether to make the full codebase openly available.
Given the overlap between some of your comments and those raised by Reviewer #2, we kindly suggest referring to our responses to Reviewer #2 for additional context and clarification regarding several key aspects of the manuscript. Additionally, below we provide specific responses to the main issues you have raised.
Scientific Article vs. Scientific Report
We appreciate the reviewer’s recognition that the technical work is solid. As we have mentioned in responses to previous reviewers, we will work in future versions of the manuscript to ensure it does not read as a project report. However, we do not fully agree that this characterization is fair. We believe the methodology is clearly explained, the results are tailored to the requirements of the scientific discipline (going beyond what would be expected in a project report), and the discussion is appropriate. The descriptive tone may reflect a more personal narrative of the work carried out, and we will review the manuscript to adjust verb tenses and improve scientific framing.
As we have also stated in response to other reviewers, it is not the objective of this publication to discuss the generalization of the AI components. These aspects are thoroughly detailed and discussed in the respective articles dedicated to these developments, which are properly cited in the text (Wagner et al., 2023; Blanch et al., 2025).
Transferability is Overstated
We are concerned by this impression, which we believe does not accurately reflect the content of the manuscript and is not entirely fair. At no point do we state that the objective of our development is the transferability of the system from one study area to another. In fact, the term "transferability" appears only four times in the manuscript:
- Line 202: To indicate that the lack of IR datasets means our dataset is NOT transferable (AI segmentation).
- Line 225: To state that the use of data augmentation facilitates the transferability of results (AI segmentation).
- Line 258: To indicate that the GCP detection method shows good transferability (as discussed in Blanch et al., 2025a).
- Line 390: "suggesting the model's high transferability," referring—as we have explained in previous responses—to the use of only 11% local images.
Transferability is not mentioned in the title, abstract, or conclusions. Suggesting possible high transferability of the AI segmentation model does not, in any way, directly or indirectly imply that the entire system is transferable, since, as the reviewer correctly points out, site-specific elements are required for each study area. Nowhere in the manuscript do we claim high system-level transferability, and in future versions we will review the content to avoid any such impression even more.
Ambiguity of Goal
We appreciate this comment and apologize for any confusion. We will rephrase this sentence in an updated version of the manuscript to be scientifically precise. The original objective of the project supporting this publication (KIWA: Artificial Intelligence for Flood Warning) was to explore the potential of AI tools for flood warning. In this regard, we recognized quite early that the optical monitoring system can achieve measurements with centimetre-range accuracy, which allows the optical measurements to be used for state updating of forecast models. In fact, because the original goal concerned floods, we could afford not to focus on achieving high precision for the lowest water levels, as these are not critical for high flood water levels. This is the reasoning behind our statement in line 455 that low-flow conditions do not represent a significant limitation for the system as a whole. Nevertheless, we will remove this specific reference to flood warning in order to avoid any confusion and better distinguish the scope of this publication from that of the broader project. As mentioned in our responses to previous reviewers, we will ensure that the motivation and scope of this work are better introduced and contextualized in the revised version.
Comparisons
We respectfully disagree with the suggestion that our comparisons are unfair. Based on the reviewer’s comment, it might seem that our manuscript criticizes or diminishes other methods, which was never our intention. We will review the manuscript to ensure this is not the case and will clarify in the introduction that our aim is to present a new method that addresses some of the limitations of previous approaches—particularly by offering a fully contactless solution.
We do not believe it is appropriate to compare our method directly with approaches focused solely on flood detection, as our development clearly extends beyond that specific application. As noted in our response to Reviewer #2, a comprehensive comparison with all non-image-based methods would be more suitable for a review article than for a research paper focused on a specific methodological contribution.
We would see the need for exhaustive comparisons if our work were limited to laboratory development under controlled conditions. However, we believe that benchmarking our system against official gauge stations, operating in the field for over two years, offers the most meaningful and rigorous evaluation.
Evaluation Metrics and Goals
We thank the reviewer for this technically focused comment. In the revised version of the manuscript, we will consider including an additional relative error metric and a discussion of operational thresholds, especially for low-flow conditions. However, we do not share the view that centimetre-level optimization is unnecessary for flood detection. We appreciate the recognition that the method could be used for flood detection without requiring the high level of precision we achieve. Nevertheless, our scientific approach is to optimize the photogrammetric process as much as possible, always aiming for the highest quality and accuracy. We believe that this pursuit of precision does not detract from the final objective. Furthermore, if the cameras are to be installed at sites with very shallow, wide river cross-sections, centimetre accuracy would be necessary. Another scenario is cameras installed at larger distances; there, too, the highest precision would be needed to obtain sufficiently accurate water level values. Since we recognize that explicitly framing "flood detection" as the primary goal may be confusing, and that focusing on "water level monitoring" would provide a clearer context, we will reconsider how this aspect is presented in the revised manuscript.
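Should we include it, one natural form for such a relative metric (a candidate on our side, not a metric already used in the manuscript) would be

$$\epsilon_i = \frac{\left|\hat{h}_i - h_i\right|}{d_i},$$

with $\hat{h}_i$ the camera-derived level, $h_i$ the reference gauge reading, and $d_i$ a normalising water depth or stage above a site-specific datum, since normalising by the raw gauge reading itself would depend on an arbitrary vertical reference.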
Citation: https://doi.org/10.5194/egusphere-2025-724-AC3
Video supplement
Water level results at Lauenstein gauge station - KIWA Project Xabier Blanch et al. https://doi.org/10.5281/zenodo.14875801