Preprint
https://doi.org/10.5194/egusphere-2024-1415
22 May 2024

Assessing the storm surge model performance: What error indicators can measure the skill?

Rodrigo Campos-Caba, Lorenzo Mentaschi, Jacopo Alessandri, Paula Camus, Andrea Mazzino, Francesco Ferrari, Ivan Federico, Michalis Vousdoukas, and Massimo Tondello

Abstract. A well-validated storm surge numerical model is crucial: it provides precise coastal hazard information and serves as a basis for extensive databases and advanced data-driven algorithms. However, selecting the best model setup based solely on common error indicators such as RMSE or the Pearson correlation does not always yield optimal results. To illustrate this, we conducted 34-year high-resolution storm surge simulations under barotropic (BT) and baroclinic (BC) configurations, using atmospheric data from ERA5 and from a high-resolution downscaling of the Climate Forecast System Reanalysis (CFSR) developed by the University of Genoa (UniGe). We combined forcings and configurations to produce three datasets: 1) BT-ERA5, 2) BC-ERA5, and 3) BC-UniGe. Model performance was assessed against nearshore station data using various statistical metrics. While RMSE and the Pearson correlation suggest BT-ERA5, i.e. the coarsest and simplest setup, as the best model, followed by BC-ERA5, we demonstrate that these indicators are not always reliable for performance assessment. The most sophisticated model, BC-UniGe, shows worse values of RMSE and Pearson correlation due to the so-called “double penalty” effect. Here we propose new skill indicators that assess the ability of the model to reproduce the distribution of the observations. This, combined with an analysis of values above the 99th percentile, identifies BC-UniGe as the best model, while the ERA5-forced simulations tend to underestimate the extremes. Although the study focuses on the accurate representation of storm surge by a numerical model, the analysis and proposed metrics can be applied to any problem involving the comparison of simulated and observed time series.
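The "double penalty" effect mentioned in the abstract can be illustrated with a few lines of code: a simulation that reproduces the observed distribution but shifts the timing of peaks is penalized twice by pointwise scores (missed peak plus false peak), while a distribution-oriented score is unaffected. The sketch below, assuming simple NumPy-based metric definitions (these are illustrative scores, not necessarily the exact indicators proposed in the paper), contrasts RMSE and Pearson correlation with a quantile-based error and an extreme-value bias above the 99th percentile:

```python
import numpy as np

def rmse(obs, sim):
    """Root-mean-square error: a pointwise score, sensitive to timing errors."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def pearson(obs, sim):
    """Pearson correlation coefficient between the two series."""
    return float(np.corrcoef(obs, sim)[0, 1])

def quantile_error(obs, sim, q=np.linspace(0.01, 0.99, 99)):
    """Mean absolute difference between simulated and observed quantiles:
    a distribution-oriented score, insensitive to phase (timing) errors."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.mean(np.abs(np.quantile(sim, q) - np.quantile(obs, q))))

def extreme_bias(obs, sim, p=99):
    """Mean bias of the simulation where observations exceed their
    p-th percentile, to check whether extremes are underestimated."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    thr = np.percentile(obs, p)
    mask = obs >= thr
    return float(np.mean(sim[mask]) - np.mean(obs[mask]))

# Synthetic demonstration of the double penalty: the "model" is the
# observed signal shifted in time, so its distribution is identical.
obs = np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))
sim = np.roll(obs, 5)  # same values, peaks arrive slightly late

print(rmse(obs, sim))            # nonzero: pointwise score penalizes the shift
print(quantile_error(obs, sim))  # ~0: the distribution is perfectly reproduced
```

A phase-shifted but otherwise perfect simulation therefore looks worse than a smoother, peak-damping one under RMSE or Pearson correlation, which is why a distribution-based skill indicator, together with a check on values above the 99th percentile, can rank the models differently.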

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this preprint. The responsibility to include appropriate place names lies with the authors.

Journal article(s) based on this preprint

25 Nov 2024
Assessing storm surge model performance: what error indicators can measure the model's skill?
Rodrigo Campos-Caba, Jacopo Alessandri, Paula Camus, Andrea Mazzino, Francesco Ferrari, Ivan Federico, Michalis Vousdoukas, Massimo Tondello, and Lorenzo Mentaschi
Ocean Sci., 20, 1513–1526, https://doi.org/10.5194/os-20-1513-2024, 2024

The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.

Short summary
Development of high-resolution simulations of storm surge in the Northern Adriatic Sea,...