Preprints
https://doi.org/10.5194/egusphere-2025-1706
25 Apr 2025

From RNNs to Transformers: benchmarking deep learning architectures for hydrologic prediction

Jiangtao Liu, Chaopeng Shen, Fearghal O'Donncha, Yalan Song, Wei Zhi, Hylke E. Beck, Tadd Bindas, Nicholas Kraabel, and Kathryn Lawson

Abstract. Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM) networks have achieved significant success in hydrological modeling. However, the recent successes of foundation models like ChatGPT and the Segment Anything Model (SAM) in natural language processing and computer vision have raised curiosity about the potential of attention-based models in the hydrologic domain. In this study, we propose a deep learning framework that seamlessly integrates multi-source, multi-scale data and multi-model modules, providing a flexible, automated platform for multi-dataset benchmarking and attention-based model comparisons beyond LSTM-centered tasks. Furthermore, we evaluate pretrained Large Language Models (LLMs) and Time Series Attention-based Models (TSAMs) in terms of their forecasting capabilities in data-sparse regions. This general framework can be applied to regression tasks, autoregression tasks, and zero-shot forecasting tasks (i.e., tasks without prior training data). We evaluate 11 different Transformer models under different scenarios against benchmark models, particularly LSTM, using datasets for runoff, soil moisture, snow water equivalent, and dissolved oxygen at global and regional scales. Results show that LSTM models perform best in memory-dependent regression tasks, especially on the global streamflow dataset. However, as tasks become more complex (from regression and data integration to autoregression and zero-shot prediction), attention-based models gradually surpass LSTM models. This study provides a robust framework for comparing and developing different model structures in the era of large-scale models, offering a valuable reference and benchmark for water resource modeling, forecasting, and management.
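
As a rough illustration of the kind of comparison the abstract describes, the following minimal PyTorch sketch trains an LSTM and a causally masked Transformer encoder on a synthetic sequence-to-sequence regression task. This is not the authors' framework: the model sizes, synthetic data, and training loop are illustrative assumptions only.

```python
# Minimal sketch (assumed setup, not the paper's framework): compare an LSTM
# and a Transformer encoder on a synthetic daily-forcing -> target regression.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 64 "basins" x 365 days x 5 forcing variables -> 1 target.
x = torch.randn(64, 365, 5)
y = x.mean(dim=-1, keepdim=True) + 0.1 * torch.randn(64, 365, 1)


class LSTMRegressor(nn.Module):
    def __init__(self, n_in=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_in, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        h, _ = self.lstm(x)          # hidden state at every time step
        return self.head(h)


class TransformerRegressor(nn.Module):
    def __init__(self, n_in=5, d_model=64, nhead=4, layers=2):
        super().__init__()
        self.proj = nn.Linear(n_in, d_model)
        enc = nn.TransformerEncoderLayer(
            d_model, nhead, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):
        # Causal mask so each day attends only to current and past forcings.
        T = x.size(1)
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        return self.head(self.encoder(self.proj(x), mask=mask))


for name, model in [("LSTM", LSTMRegressor()),
                    ("Transformer", TransformerRegressor())]:
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print(f"{name}: final training MSE = {loss.item():.4f}")
```

The causal mask keeps the Transformer's setup comparable to the LSTM's strictly forward-in-time processing; both models map each day's forcings to a same-day prediction.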

Competing interests: Kathryn Lawson and Chaopeng Shen have financial interests in HydroSapient, Inc., a company which could potentially benefit from the results of this research. This interest has been reviewed by The Pennsylvania State University in accordance with its individual conflict of interest policy for the purpose of maintaining the objectivity and the integrity of research.

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. Views expressed in the text are those of the authors and do not necessarily reflect the views of the publisher.

Status: final response (author comments only)

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • RC1: 'Comment on egusphere-2025-1706', Anonymous Referee #1, 12 Jun 2025
    • AC1: 'Reply on RC1', Jiangtao Liu, 23 Jul 2025
  • RC2: 'Comment on egusphere-2025-1706', Anonymous Referee #2, 24 Jun 2025
    • AC2: 'Reply on RC2', Jiangtao Liu, 23 Jul 2025

Viewed

Total article views: 1,049 (including HTML, PDF, and XML)
  • HTML: 917
  • PDF: 117
  • XML: 15
  • Total: 1,049
  • Supplement: 42
  • BibTeX: 22
  • EndNote: 26
Cumulative views and downloads (calculated since 25 Apr 2025)

Viewed (geographical distribution)

Total article views: 1,056 (including HTML, PDF, and XML), of which 1,056 have geography defined and 0 are of unknown origin.
Latest update: 18 Sep 2025
Short summary
Using global and regional datasets, we compared attention-based models and Long Short-Term Memory (LSTM) models to predict hydrologic variables. Our results show LSTM models perform better in simpler tasks, whereas attention-based models perform better in complex scenarios, offering insights for improved water resource management.