20 Oct 2023
Status: this preprint is open for discussion.

Representation learning with unconditional denoising diffusion models for dynamical systems

Tobias Sebastian Finn, Lucas Disson, Alban Farchi, Marc Bocquet, and Charlotte Durand

Abstract. We propose denoising diffusion models for data-driven representation learning of dynamical systems. In this type of generative deep learning, a neural network is trained to denoise and thereby reverse a diffusion process in which Gaussian noise is added to states from the attractor of a dynamical system. Applied iteratively, the neural network can then map samples from isotropic Gaussian noise to the state distribution. We showcase the potential of such neural networks in experiments with the Lorenz 63 system. Trained for state generation, the neural network can produce samples almost indistinguishable from those on the attractor. The model has thereby learned an internal representation of the system that is applicable to tasks other than state generation. As a first task, we fine-tune the pre-trained neural network for surrogate modelling by retraining its last layer while keeping the remaining network as a fixed feature extractor. In these low-dimensional settings, such fine-tuned models perform similarly to deep neural networks trained from scratch. As a second task, we apply the pre-trained model to generate an ensemble from a deterministic run. Diffusing the run and then iteratively applying the neural network conditions the state generation, which allows us to sample from the attractor in the run's neighborhood. To control the resulting ensemble spread and Gaussianity, we tune the diffusion time and, thus, the sampled portion of the attractor. While easier to tune, the proposed ensemble sampler can outperform tuned static covariances in ensemble optimal interpolation. These two applications thus show that denoising diffusion models are a promising route towards representation learning for dynamical systems.
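The forward half of this training setup can be sketched in a few lines of NumPy: states on the Lorenz 63 attractor are standardized and then progressively noised under a variance-preserving schedule, so that at the final diffusion time they are close to isotropic Gaussian noise. The integrator, the linear beta schedule, and all parameter values below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def lorenz63_trajectory(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz 63 system with a simple Runge-Kutta 4 scheme."""
    def f(x):
        return np.array([sigma * (x[1] - x[0]),
                         x[0] * (rho - x[2]) - x[1],
                         x[0] * x[1] - beta * x[2]])
    x = np.array([1.0, 1.0, 1.0])
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        k1 = f(x); k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = x
    return traj

def forward_diffuse(x0, t, alpha_bar):
    """Sample x_t ~ q(x_t | x_0) in a variance-preserving diffusion."""
    eps = np.random.randn(*x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

# Linear beta schedule, a common DDPM choice (an assumption here).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

states = lorenz63_trajectory(6000)[1000:]            # discard spin-up toward the attractor
states = (states - states.mean(0)) / states.std(0)   # standardize the attractor states
x_T, _ = forward_diffuse(states, T - 1, alpha_bar)   # nearly isotropic Gaussian noise
```

The denoising network would be trained to predict `eps` from `x_t` and `t`; iterating the learned reverse steps from `x_T` back to `t = 0` then generates new attractor-like states.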


Status: open (until 27 Dec 2023)



Model code and software

cerea-daml/ddm-attractor Tobias Sebastian Finn



Total article views: 179 (including HTML, PDF, and XML)

  HTML  PDF  XML  Total  BibTeX  EndNote
   115   58    6    179       3        3

Views and downloads (calculated since 20 Oct 2023)

Viewed (geographical distribution)

Total article views: 164 (including HTML, PDF, and XML), of which 164 have a defined geographical origin and 0 an unknown origin.
Latest update: 29 Nov 2023
Short summary
We train neural networks as denoising diffusion models for state generation in the Lorenz 1963 system and demonstrate that they learn an internal representation of the system. We use this learned representation and the pre-trained model in two downstream tasks: surrogate modelling and ensemble generation. For both tasks, the diffusion model can outperform other, more common approaches. We therefore see potential in representation learning with diffusion models for dynamical systems.
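The ensemble-generation idea above can be illustrated with a small toy: a deterministic state is diffused to an intermediate diffusion time, and each ensemble member is then denoised back with the reverse (ancestral sampling) process, so that a larger diffusion time samples a larger portion of the state distribution. Since no trained Lorenz 63 denoiser is available here, this sketch substitutes the closed-form optimal noise prediction for standard-Gaussian data; a real application would plug in the pre-trained diffusion network. The function names, schedule, and parameter values are illustrative assumptions.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # assumed linear schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def eps_model(x_t, t):
    # Stand-in for the trained denoiser: for x_0 ~ N(0, I), the optimal
    # noise prediction has this closed form. A real application would
    # call the pre-trained diffusion network here instead.
    return np.sqrt(1.0 - alpha_bar[t]) * x_t

def ddpm_reverse_step(x_t, t, rng):
    # One ancestral sampling step of the DDPM reverse process.
    eps = eps_model(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

def ensemble_from_run(x0, tau, n_members, rng):
    # Diffuse the deterministic state to time tau, then denoise each
    # member back to t = 0; larger tau yields a larger ensemble spread.
    eps = rng.standard_normal((n_members, x0.size))
    x = np.sqrt(alpha_bar[tau]) * x0 + np.sqrt(1.0 - alpha_bar[tau]) * eps
    for t in range(tau, -1, -1):
        x = ddpm_reverse_step(x, t, rng)
    return x

rng = np.random.default_rng(0)
x0 = np.zeros(3)                               # a single deterministic state
small = ensemble_from_run(x0, 50, 256, rng)    # short diffusion: tight ensemble
large = ensemble_from_run(x0, 500, 256, rng)   # long diffusion: wide ensemble
```

Comparing `small.std()` with `large.std()` shows how the diffusion time acts as the single tuning knob for the ensemble spread, which is the property exploited in the ensemble optimal interpolation experiments.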