This work is distributed under the Creative Commons Attribution 4.0 License.
From RNNs to Transformers: benchmarking deep learning architectures for hydrologic prediction
Abstract. Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM) networks have achieved significant success in hydrological modeling. However, the recent successes of foundation models such as ChatGPT and the Segment Anything Model (SAM) in natural language processing and computer vision have raised curiosity about the potential of attention-based models in the hydrologic domain. In this study, we propose a deep learning framework that seamlessly integrates multi-source, multi-scale data and multi-model modules, providing a flexible, automated platform for multi-dataset benchmarking and attention-based model comparisons beyond LSTM-centered tasks. Furthermore, we evaluate pretrained Large Language Models (LLMs) and Time Series Attention-based Models (TSAMs) in terms of their forecasting capabilities in data-sparse regions. This general framework can be applied to regression tasks, autoregression tasks, and zero-shot forecasting tasks (i.e., tasks without prior training data). We evaluated 11 different Transformer models under different scenarios in comparison to benchmark models, particularly LSTM, using datasets for runoff, soil moisture, snow water equivalent, and dissolved oxygen at global and regional scales. Results show that LSTM models perform best in memory-dependent regression tasks, especially on the global streamflow dataset. However, as tasks become more complex (from regression and data integration to autoregression and zero-shot prediction), attention-based models gradually surpass LSTM models. This study provides a robust framework for comparing and developing different model structures in the era of large-scale models, offering a valuable reference and benchmark for water resource modeling, forecasting, and management.
Competing interests: Kathryn Lawson and Chaopeng Shen have financial interests in HydroSapient, Inc., a company which could potentially benefit from the results of this research. This interest has been reviewed by The Pennsylvania State University in accordance with its individual conflict of interest policy for the purpose of maintaining the objectivity and the integrity of research.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. Views expressed in the text are those of the authors and do not necessarily reflect the views of the publisher.
Status: final response (author comments only)
- RC1: 'Comment on egusphere-2025-1706', Anonymous Referee #1, 12 Jun 2025
- AC1: 'Reply on RC1', Jiangtao Liu, 23 Jul 2025
- RC2: 'Comment on egusphere-2025-1706', Anonymous Referee #2, 24 Jun 2025
The authors evaluate different transformer models under different scenarios (i.e., regression, data integration, and autoregression) in comparison with benchmark models, LSTM networks and DLinear, using various hydrologic datasets. In addition, they compare the performance of LSTM networks with pre-trained LLMs in zero-shot forecasting for autoregression tasks. They show that LSTM networks outperform transformers in regression and data integration tasks, while attention-based methods surpass LSTM networks in autoregression and zero-shot forecasting. The paper is well written, with deep discussion. I have minor comments below.
For broader audiences with limited ML/DL backgrounds, it would be helpful to provide brief introductions of the DL models used in the manuscript. The information can be provided in the SI if there is limited space in the main text. Table 1 provides the main features of different variants of transformers. Since ML/DL has many unique terms, simply providing the names of features does not really help readers understand their differences. Maybe the authors can adapt the table with more general features, such as "Trained on time series".
In Section 2.3 (attention models), it would be better to mention that the authors use pre-trained LLMs for zero-shot forecasting; this information only comes up in Section 2.4.5. In addition, for zero-shot forecasting, the authors only compare DL model performance in autoregression tasks, and state that LSTM underperforms pre-trained LLMs in zero-shot forecasting. How about regression and data integration tasks? I would expect that the LLMs cannot really understand the relationship between input and output without fine-tuning.
Specific comments:
Some figures for equations are blurry, like Equation 5.
Equation 6: state that r is Pearson's correlation coefficient.
Equation 8: I think the equation is wrong. It should be (RMSE^2 - Bias^2)^0.5.
Citation: https://doi.org/10.5194/egusphere-2025-1706-RC2
- AC2: 'Reply on RC2', Jiangtao Liu, 23 Jul 2025
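The reviewer's suggested correction for Equation 8 is the standard unbiased-RMSE decomposition, ubRMSE = (RMSE^2 - Bias^2)^0.5, which holds exactly because ubRMSE is the RMSE of the mean-removed errors. A minimal sketch verifying the identity numerically (function names are illustrative, not from the manuscript):

```python
import math

def rmse(sim, obs):
    """Root mean square error between simulated and observed values."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def bias(sim, obs):
    """Mean error (simulated minus observed)."""
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)

def ub_rmse(sim, obs):
    """Unbiased RMSE: RMSE of the errors after removing their mean."""
    b = bias(sim, obs)
    return math.sqrt(sum((s - o - b) ** 2 for s, o in zip(sim, obs)) / len(obs))

obs = [1.0, 2.0, 3.0, 4.0]
sim = [1.5, 2.1, 3.4, 4.2]

# The identity ubRMSE = sqrt(RMSE^2 - Bias^2) holds exactly.
lhs = ub_rmse(sim, obs)
rhs = math.sqrt(rmse(sim, obs) ** 2 - bias(sim, obs) ** 2)
assert abs(lhs - rhs) < 1e-12
```

This follows directly from the bias-variance split of the mean squared error, so the relation is algebraic rather than approximate.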
The manuscript is of good quality and of high relevance. However, the methods are not yet described in sufficient detail to allow a final judgment of the value of the results. In the supplementary document, I provide more details on the points that I find missing in the method section, as well as other points that should be addressed.