https://doi.org/10.5194/egusphere-2023-1156
05 Jun 2023

On the importance of observation uncertainty when evaluating and comparing models: a hydrological example

Jerom P.M. Aerts, Jannis M. Hoch, Gemma Coxon, Nick C. van de Giesen, and Rolf W. Hut

Abstract. Model comparison in the geosciences involves either refining a single model or comparing different model structures. Such comparison studies are potentially invalid, however, if the uncertainty estimates of the observations are not considered when evaluating relative model performance. The temporal sampling of the observation and simulation time series is an additional source of uncertainty, as a few observation and simulation pairs, in the form of outliers, can have a disproportionate effect on the model skill score. In this study we highlight the importance of including observation uncertainty and temporal sampling uncertainty when comparing or evaluating hydrological models.

In hydrology, large-sample datasets contain collections of catchments with hydro-meteorological time series, catchment boundaries, and catchment attributes, and provide an excellent test bed for model evaluation and comparison studies. In this study, two model experiments covering different purposes of model evaluation are set up using 396 catchments from the CAMELS-GB dataset. The first, intra-model, experiment mimics a model refinement case by evaluating the streamflow estimates of the distributed wflow_sbm hydrological model with and without additional calibration. The second, inter-model, experiment compares the streamflow estimates of the distributed PCR-GLOBWB and wflow_sbm hydrological models.

The temporal sampling uncertainty, which results from outliers in observation and simulation pairs, is found to be substantial throughout the case study area. High temporal sampling uncertainty indicates that the model skill scores used to evaluate model performance are heavily influenced by only a few data points in the time series. This is the case for half of the simulations (210) in the first, intra-model, experiment and for 53 catchment simulations in the second, inter-model, experiment, as indicated by a sampling uncertainty larger than the difference in the KGE-NP model skill score. These cases highlight the importance of reporting, and determining the cause of, temporal sampling uncertainty before drawing conclusions on model performance from large-sample hydrology. The streamflow observation uncertainty analysis shows similar results. One third of the catchment simulations (123) in the intra-model experiment show smaller differences between simulated streamflow than the streamflow observation uncertainty, compared to only 4 catchment simulations in the inter-model experiment, where the differences between streamflow simulations are larger. These catchment simulations should be excluded before drawing conclusions from large samples of catchments. The results of this study demonstrate that it is crucial for benchmark efforts based on large samples of catchments to include streamflow observation uncertainty and temporal sampling uncertainty in order to obtain more robust results.
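The temporal sampling check described above can be illustrated with a paired bootstrap: resample observation and simulation pairs with replacement, recompute the skill score, and compare the spread of the resampled scores with the score difference between model variants. Below is a minimal sketch in Python, assuming a common non-parametric KGE formulation (Spearman rank correlation, a normalised flow-duration-curve variability term, and a bias ratio); function names and the 95 % interval choice are illustrative, not the authors' exact procedure.

```python
import numpy as np

def kge_np(obs, sim):
    # Non-parametric KGE (assumed formulation): rank correlation,
    # normalised flow-duration-curve term, and mean-flow bias ratio.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    n = obs.size
    rank = lambda x: np.argsort(np.argsort(x))   # simple ranks (no tie handling)
    r = np.corrcoef(rank(obs), rank(sim))[0, 1]  # Spearman correlation term
    fdc_obs = np.sort(obs) / (n * obs.mean())    # normalised flow-duration curves
    fdc_sim = np.sort(sim) / (n * sim.mean())
    alpha = 1.0 - 0.5 * np.abs(fdc_sim - fdc_obs).sum()  # variability term
    beta = sim.mean() / obs.mean()                       # bias term
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def sampling_uncertainty(obs, sim, n_boot=1000, seed=0):
    # Bootstrap paired time steps with replacement; the spread of the
    # resampled scores estimates the temporal sampling uncertainty.
    rng = np.random.default_rng(seed)
    n = len(obs)
    scores = [kge_np(obs[idx], sim[idx])
              for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(scores, [2.5, 97.5])  # 95 % interval of the score
```

If the KGE-NP difference between two model variants is smaller than the width of this interval, a few time steps dominate the score and the ranking of the models is not robust.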

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this preprint. The responsibility to include appropriate place names lies with the authors.

Status: closed

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • RC1: 'Comment on egusphere-2023-1156', Keith Beven, 06 Jun 2023
    • AC1: 'Reply on RC1', Jerom Aerts, 28 Aug 2023
  • RC2: 'Comment on egusphere-2023-1156', Anonymous Referee #2, 17 Jul 2023
    • AC2: 'Reply on RC2', Jerom Aerts, 28 Aug 2023


Viewed

Total article views: 1,047 (including HTML, PDF, and XML)
  • HTML: 731
  • PDF: 272
  • XML: 44
  • Total: 1,047
  • BibTeX: 35
  • EndNote: 38
Cumulative views and downloads (calculated since 05 Jun 2023)

Viewed (geographical distribution)

Total article views: 1,035 (including HTML, PDF, and XML). Of these, 1,035 have a defined geographical origin and 0 an unknown origin.
Latest update: 07 Oct 2024
Short summary
Evaluating hydrological model performance involves comparing simulated states and fluxes with their observed counterparts. It is often overlooked that the observations carry inherent uncertainty, which can significantly affect the results. In this publication, we emphasize the importance of accounting for observation uncertainty in model comparison and propose a practical method applicable to any observational time series with available uncertainty estimates.
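As a rough illustration of the kind of check the summary refers to, and not the authors' exact procedure: a time step contributes little evidence for ranking two models when the difference between their streamflow simulations is smaller than the observation uncertainty band at that step. A minimal sketch under that assumption (all names hypothetical):

```python
import numpy as np

def inconclusive_fraction(sim_a, sim_b, obs_lower, obs_upper):
    # Fraction of time steps where the difference between two streamflow
    # simulations falls inside the observation uncertainty band, i.e.
    # where the observations cannot distinguish the two models
    # (hypothetical per-time-step criterion).
    diff = np.abs(np.asarray(sim_a, float) - np.asarray(sim_b, float))
    band = np.asarray(obs_upper, float) - np.asarray(obs_lower, float)
    return float(np.mean(diff < band))
```

Catchments where this fraction is high would add little to a model ranking and, following the reasoning above, could be excluded before drawing large-sample conclusions.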