This work is distributed under the Creative Commons Attribution 4.0 License.
A low-cost approach to monitoring streamflow dynamics in small, headwater streams using timelapse imagery and a deep learning model
Abstract. Despite their ubiquity and importance as freshwater habitat, small headwater streams are under-monitored by existing stream gage networks. To address this gap, we describe a low-cost, non-contact, and low-effort method that enables organizations to monitor streamflow dynamics in small headwater streams. The method uses a camera to capture repeat images of the stream from a fixed position. A person then annotates pairs of images, in each case indicating which image has more apparent streamflow, or indicating equal flow if no difference is discernible. A deep learning modelling framework called Streamflow Rank Estimation (SRE) is then trained on the annotated image pairs and applied to rank all images from highest to lowest apparent streamflow. From this ranking, a relative hydrograph can be derived. We found that our modelled relative hydrograph dynamics matched the observed hydrograph dynamics well for 11 cameras at 8 streamflow sites in western Massachusetts. Higher performance was observed during the annotation period (median Kendall's Tau rank correlation 0.75, range 0.60–0.83) than after it (median Kendall's Tau 0.59, range 0.34–0.74). We found that annotation performance was generally consistent across the eleven camera sites and two individual annotators and was positively correlated with streamflow variability at a site. A scaling simulation determined that model performance improvements were limited after 1,000 annotation pairs. Our model's estimates of relative flow, while not equivalent to absolute flow, may still be useful for many applications, such as ecological modelling and calculating event-based hydrological statistics (e.g., the number of out-of-bank floods). We anticipate this method will be a valuable tool to extend existing stream monitoring networks and provide new insights on dynamic headwater systems.
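For readers unfamiliar with the evaluation metric, the sketch below shows how a modelled relative hydrograph can be scored against gage observations with Kendall's Tau, as reported in the abstract. This is a minimal illustration using synthetic data, not the authors' code; the variable names are assumptions.

```python
# Minimal sketch (not the authors' code): scoring how well a modelled
# relative hydrograph tracks observed flow, using Kendall's Tau rank
# correlation as reported in the paper. `scores` stand in for model-
# predicted relative-flow values and `observed` for gage discharges
# sampled at the same timestamps.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
observed = rng.lognormal(mean=0.0, sigma=1.0, size=500)      # synthetic discharge
scores = np.log(observed) + rng.normal(scale=0.5, size=500)  # noisy model scores

tau, p_value = kendalltau(scores, observed)
print(f"Kendall's Tau rank correlation: {tau:.2f} (p={p_value:.1e})")
```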
Status: open (until 13 May 2025)
RC1: 'Comment on egusphere-2025-1186', Anonymous Referee #1, 18 Apr 2025
Review of “A low-cost approach to monitoring streamflow dynamics in small, headwater streams using timelapse imagery and a deep learning model”
In their paper, the authors test a low-cost method for monitoring small headwater catchments using time-lapse imagery. A deep learning model is trained on pairs of images evaluated by a person; the evaluator compares apparent differences in streamflow (equal, higher, lower) for each image pair. The trained model is then applied to rank images and produce a relative hydrograph.
I have read the paper with interest and find it to be innovative and well written; I think it can be published after moderate revision.
Comments:
- Methods – Data Annotation: How are pairs of images chosen from the complete set of imagery? How do the authors ensure that training pairs cover the full range of conditions expected for a particular site?
- Figure 6: there appears to be a large spike in the predicted model score towards the end of the time series (see inset) that is not matched by a corresponding streamflow observation. Can the authors speculate as to what might be causing this discrepancy? Also, the yellow text in Figure 6 is difficult to read; consider changing its color.
- Discussion, lines 356-367: Please define “perfect annotator”, “left”, and “right” in this context.
- Discussion, line 426: can you add other examples of how relative flow data might be used? This would help to define the broader impact of this study.
- Conclusions, line 468: throughout the paper, the authors are very careful to clarify that their product is relative streamflow as opposed to discharge (volume per time). It is critical to also be clear about this in the conclusions section and in the title. Otherwise, readers might see “streamflow” and conclude “volumetric flow rate”. I recommend adding “relative” to line 468 as well as to the title of the paper.
- Line 476: please define “left/right” again in this context.
- Can this technique be used to distinguish between the presence/absence of flow in an image? If so, the authors’ method might be useful for determining the intermittency of small headwater streams, a potentially important application.
- I think this paper would be strengthened by adding some discussion of how the method might be modified to transform relative flow to absolute discharge. The authors briefly mention this in the discussion (line 430), but I think this deserves some additional attention; one possible route is sketched below.
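One hypothetical route, not described in the manuscript: if a handful of spot discharge measurements were available at a site, a monotonic mapping could be fit from model scores to discharge, acting like a rating curve. The values and names below are illustrative only.

```python
# Hypothetical sketch of tying relative flow to absolute discharge
# (an assumption for illustration, not the authors' method): fit a
# monotonic mapping from model scores to a few spot discharge
# measurements, then apply it to the full score series.
import numpy as np
from sklearn.isotonic import IsotonicRegression

# A few manual discharge measurements paired with model scores at the same times
calib_scores = np.array([-1.2, -0.3, 0.1, 0.8, 1.5])
calib_discharge = np.array([0.02, 0.05, 0.08, 0.30, 1.10])  # m^3/s

# Monotonic fit preserves the rank structure the SRE model provides
rating = IsotonicRegression(out_of_bounds="clip").fit(calib_scores, calib_discharge)

all_scores = np.linspace(-1.5, 2.0, 8)   # stand-in for the full score series
estimated_q = rating.predict(all_scores)
print(np.round(estimated_q, 3))
```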
Citation: https://doi.org/10.5194/egusphere-2025-1186-RC1
RC2: 'Comment on egusphere-2025-1186', Anonymous Referee #2, 20 Apr 2025
The manuscript presents a novel and practically valuable method for the measurement of relative streamflow from camera data. This approach stands out for its accessibility to non-expert users and its relevance for applications that consider relative discharge estimations, which the authors convincingly highlight. Furthermore, the usage of the FPE web application is commendable, as it provides an important tool to improve data availability and encourage broader participation in hydrological monitoring.
The focus on relative discharge estimation is both timely and needed. However, I agree with the first reviewer that emphasis on the relative aspect should be further reinforced in the title and conclusion, to better manage reader expectations regarding the method’s scope and to clarify its intended application.
Given the clear focus on methodological development rather than novel scientific findings per se, I suggest the manuscript may be better suited as a Technical Note rather than a Research Article.
A few points require further clarification and elaboration:
- The current assessment of camera stability is limited to qualitative categories. While this is a good first step, more robust, quantitative approaches to evaluating movement—such as automatic image co-registration techniques (e.g., Ljubičić et al., 2021)—are available and should at least be discussed. The current three-category approach may not provide sufficient resolution to draw strong conclusions about the influence of camera motion.
- While the authors emphasize the method’s independence from gauge data, it is used to validate annotator accuracy. This introduces a reliance on gauge data that should be clarified, especially in the context of sites lacking such reference data. The authors should elaborate on how annotator error could be quantified under such conditions, particularly for visually complex or challenging sites.
- Additional details on the dataset and model training process are necessary to ensure reproducibility. For example, how many images fell into the “don’t know” and other categories (i.e., assessing potential imbalance)? More specifics on data augmentation, e.g., how many training images remained after augmentation, could also be provided. Furthermore, what batch size, learning rate, scheduler type, and number of training epochs were used? A summary table would be a concise and helpful way to present this information.
- The current explanation of the training process could benefit from additional clarity; I am afraid that I did not fully understand the training approach. Were both images shown simultaneously to the neural network (could a Siamese network architecture also have been considered)? Was a single model trained across all sites, or were site-specific models developed? (A minimal sketch of such a Siamese setup follows this comment list.)
- It would be useful to discuss a bit more how the model might perform under extreme, previously unseen conditions (e.g., large floods). I would assume failure in unseen situations, as the authors’ held-out test results in the supplement also reveal much weaker performance. With water segmentation, for example, the focus would be on a specific object, which might be easier to re-identify during extreme events, since the appearance of water does not change as strongly. A discussion of potential failure modes in such situations, compared to more object-specific segmentation methods, could be informative.
- The authors touch on some environmental factors like fog and glare. These, along with perspective distortions (e.g., using Gaussian Splatting if multi-view images are available), could be more deeply addressed using advanced augmentation strategies in a future study. This may offer paths toward increasing the robustness of the method under diverse conditions.
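To make the Siamese question above concrete, the following is a minimal sketch of a pairwise-ranking setup of the kind the referee describes: one shared CNN scores each image of a pair, and a margin ranking loss pushes the higher-flow image toward the higher score. This is an assumption for illustration, not necessarily the architecture used in the manuscript; the backbone choice, margin, and tensor shapes are placeholders.

```python
# Illustrative Siamese pairwise-ranking sketch (assumed architecture,
# not confirmed by the manuscript): a shared backbone maps each image
# of an annotated pair to a scalar flow score, trained with a margin
# ranking loss.
import torch
import torch.nn as nn
from torchvision import models

class FlowScorer(nn.Module):
    """Shared backbone that maps an image to a scalar flow score."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)          # placeholder backbone
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x).squeeze(-1)               # shape (N,)

model = FlowScorer()
loss_fn = nn.MarginRankingLoss(margin=1.0)                # margin is a placeholder

# Dummy annotated pair: img_a judged to have more flow than img_b (label +1)
img_a = torch.randn(4, 3, 224, 224)
img_b = torch.randn(4, 3, 224, 224)
label = torch.ones(4)   # +1 means score(img_a) should exceed score(img_b)

loss = loss_fn(model(img_a), model(img_b), label)
loss.backward()
```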
Overall, this manuscript introduces an innovative and accessible tool for relative streamflow estimation, with clear real-world utility that can benefit the broader hydrological community. While the methodological foundation is solid, the manuscript could be strengthened by clarifying certain technical aspects. I recommend publication after minor revisions and strongly support reformatting the submission as a Technical Note.
Minor Comments:
Line 120–122: When referencing “nearby” gauges, please specify the distances involved and potential hydrological complexities (e.g., potential tributaries in-between) that might impact the comparability of measurements.
Line 275–279: This section includes some repetition and can be shortened or even removed.
Reference (please do not take my listed reference as a request to add it to your references, but solely as a suggestion for more information):
Ljubičić, R., Strelnikova, D., Perks, M.T., Eltner, A., Peña-Haro, S., Pizarro, A., Dal Sasso, S.F., Scherling, U., Vuono, P., Manfreda, S. (2021): A comparison of tools and techniques for stabilising unmanned aerial system (UAS) imagery for surface flow observations. Hydrology and Earth System Sciences, 25, 5105–5132
Citation: https://doi.org/10.5194/egusphere-2025-1186-RC2
Data sets
Model Predictions, Observations, and Annotation Data for Deep Learning Models Developed To Estimate Relative Flow at 11 Massachusetts Streamflow Sites P. J. Goodling et al. https://doi.org/10.5066/P14LU6CQ
U.S. Geological Survey EcoDrought Stream Discharge, Gage Height and Water Temperature Data in Massachusetts (ver. 2.0, February 2025) J. B. Fair et al. https://doi.org/10.5066/P9ES4RQS
U.S. Geological Survey Flow Photo Explorer U.S. Geological Survey https://www.usgs.gov/apps/ecosheds/fpe
Model code and software
fpe-model v0.9.0 Jeffrey Walker https://github.com/EcoSHEDS/fpe-model
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 244 | 40 | 6 | 290 | 12 | 4 | 4 |