https://doi.org/10.48550/arXiv.2512.03653
23 Feb 2026
Status: this preprint is open for discussion and under review for Nonlinear Processes in Geophysics (NPG).

Conditional updates of neural network weights for increased out of training performance

Jan Saynisch-Wagner and Saran Rajendran Sari

Abstract. This study proposes a method to enhance neural network performance when training data and application data differ substantially, e.g., in out-of-distribution problems and under pattern and regime shifts. The method consists of three main steps: 1) Retrain the neural network towards reasonable subsets of the training data set and record the resulting weight anomalies. 2) Choose reasonable predictors and derive a regression between the predictors and the weight anomalies. 3) Extrapolate the weights, and thereby the neural network, to the application data. We show and discuss this method in three nonlinear use cases from the climate sciences, which include successful temporal, spatial, and cross-domain extrapolations of neural networks.
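The three steps above can be sketched on a toy problem. The code below is a minimal, hypothetical illustration, not the authors' implementation: the "network" is a two-parameter linear model, the regime shift is a slope that drifts with a scalar predictor c, and all function names and choices (warm-started retraining, least-squares regression of the weight anomalies on c) are assumptions made for the sketch.

```python
# Hypothetical sketch of the three-step conditional-weight-update scheme:
# 1) retrain on subsets and record weight anomalies, 2) regress anomalies
# on a predictor, 3) extrapolate the weights to new conditions.
import numpy as np

rng = np.random.default_rng(0)

def train(x, y, w=None, steps=500, lr=0.1):
    """Fit a tiny linear 'network' y ~ w0 + w1*x by gradient descent."""
    w = np.zeros(2) if w is None else w.copy()
    for _ in range(steps):
        resid = w[0] + w[1] * x - y
        w -= lr * np.array([np.mean(resid), np.mean(resid * x)])
    return w

def make_data(c, n=200):
    """Toy regime shift: the true slope drifts with the predictor c."""
    x = rng.uniform(-1, 1, n)
    y = (1.0 + 0.5 * c) * x + 0.1 * rng.standard_normal(n)
    return x, y

# Reference training on the full training regime, c in [0, 1].
cs = np.linspace(0, 1, 5)
subsets = [make_data(c) for c in cs]
w_ref = train(np.concatenate([x for x, _ in subsets]),
              np.concatenate([y for _, y in subsets]))

# Step 1: retrain towards each subset (warm-started from the reference
# weights) and record the resulting weight anomalies.
anomalies = np.array([train(x, y, w=w_ref) - w_ref for x, y in subsets])

# Step 2: regress the weight anomalies on the predictor c (least squares).
A = np.stack([np.ones_like(cs), cs], axis=1)
coef, *_ = np.linalg.lstsq(A, anomalies, rcond=None)

# Step 3: extrapolate the weights to an out-of-training regime, c = 2.
c_new = 2.0
w_new = w_ref + np.array([1.0, c_new]) @ coef

# The extrapolated network should fit the new regime better than the
# unmodified reference network.
x_test, y_test = make_data(c_new, n=1000)
err_ref = np.mean((w_ref[0] + w_ref[1] * x_test - y_test) ** 2)
err_new = np.mean((w_new[0] + w_new[1] * x_test - y_test) ** 2)
print(err_new < err_ref)
```

Because the toy anomalies are (approximately) linear in c, the regression in step 2 captures them well and the extrapolated weights track the shifted slope at c = 2; with a real network, the same recipe applies per weight, with predictors chosen to describe the shift between training and application data.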


Status: open (until 20 Apr 2026)

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor

Viewed

Since the preprint corresponding to this journal article was posted outside of Copernicus Publications, the preprint-related metrics are limited to HTML views.

Total article views: 36 (including HTML, PDF, and XML)
HTML PDF XML Total BibTeX EndNote
36 0 0 36 0 0
Views and downloads (calculated since 23 Feb 2026)

Viewed (geographical distribution)


Total article views: 36 (including HTML, PDF, and XML). Of these, 36 have a defined geographic origin and 0 an unknown origin.
Latest update: 28 Feb 2026
Short summary
Neural networks are limited in situations that differ from the learned conditions. We propose a solution to this out-of-distribution problem. We derive anomalies of a trained neural network's internal parameters by retraining on subsets of the same training data. We then relate these network-parameter anomalies to the differences in the training data subsets that caused them. Finally, we extrapolate the found relations to generate networks that perform better outside the training distribution.