Data-assimilated model-informed reinforcement learning

Main: 27 pages, 18 figures, 3 tables; bibliography: 4 pages
Abstract

The control of spatiotemporal chaos is challenging because of its high dimensionality and unpredictability. Model-free reinforcement learning (RL) discovers optimal control policies by interacting with the system, but it typically requires observations of the full physical state. In practice, sensors provide only partial and noisy measurements (observations) of the system. The objective of this paper is to develop a framework that enables the control of chaotic systems under partial and noisy observability. The proposed method, data-assimilated model-informed reinforcement learning (DA-MIRL), integrates (i) low-order models to approximate the high-dimensional dynamics; (ii) sequential data assimilation to correct the model prediction when observations become available; and (iii) an off-policy actor-critic RL algorithm to adaptively learn an optimal control strategy from the corrected state estimates. We test DA-MIRL on the spatiotemporally chaotic solutions of the Kuramoto-Sivashinsky equation. We estimate the full state of the environment with (i) a physics-based model (here, a coarse-grained model) and (ii) a data-driven model (here, the control-aware echo state network, which is proposed in this paper). We show that DA-MIRL successfully estimates and suppresses the chaotic dynamics of the environment in real time from partial observations and approximate models. This work opens opportunities for the control of partially observable chaotic systems.

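To make the sense-estimate-act loop concrete, the following is a minimal sketch (not the authors' implementation): it assumes a stochastic ensemble Kalman filter for the data-assimilation step, a damped linear map standing in for both the environment and the low-order forecast model, and a fixed linear feedback in place of the trained actor-critic. Names such as forecast_model, enkf_update, and policy are illustrative only; in the paper, the forecast model would be a coarse-grained solver or a control-aware echo state network, and the policy would be learned off-policy.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's settings).
n_state, n_obs, n_ens, n_act = 8, 3, 20, 1

# Stand-in low-order forecast model: a damped linear map with control input.
A = 0.90 * np.eye(n_state) + 0.02 * rng.standard_normal((n_state, n_state))
B = rng.standard_normal((n_state, n_act))

def forecast_model(ensemble, actions):
    # Propagate each ensemble member one step under the applied action.
    noise = 0.01 * rng.standard_normal(ensemble.shape)
    return ensemble @ A.T + actions @ B.T + noise

# Partial observation operator: only the first n_obs components are sensed.
H = np.zeros((n_obs, n_state))
H[:, :n_obs] = np.eye(n_obs)
R = 0.05 ** 2 * np.eye(n_obs)  # observation-noise covariance

def enkf_update(ensemble, y):
    # Stochastic ensemble Kalman filter analysis step.
    X = ensemble - ensemble.mean(axis=0)
    Y = X @ H.T
    C_yy = Y.T @ Y / (n_ens - 1) + R
    C_xy = X.T @ Y / (n_ens - 1)
    K = C_xy @ np.linalg.inv(C_yy)  # Kalman gain
    perturbed_y = y + rng.multivariate_normal(np.zeros(n_obs), R, n_ens)
    return ensemble + (perturbed_y - ensemble @ H.T) @ K.T

def policy(state_estimate):
    # Placeholder for the actor network: linear feedback on the estimate.
    return -0.1 * state_estimate[:n_act]

truth = rng.standard_normal(n_state)              # stand-in environment state
ensemble = rng.standard_normal((n_ens, n_state))  # ensemble state estimate

for t in range(50):
    action = policy(ensemble.mean(axis=0))        # act on the *estimated* state
    # Environment step (stand-in dynamics) and partial, noisy observation.
    truth = A @ truth + B @ action + 0.01 * rng.standard_normal(n_state)
    y = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)
    # Model forecast, then assimilate the observation to correct the estimate.
    ensemble = forecast_model(ensemble, np.tile(action, (n_ens, 1)))
    ensemble = enkf_update(ensemble, y)
    # An off-policy actor-critic would store (estimate, action, reward,
    # next estimate) transitions here and update its networks from a replay buffer.

print("final estimation error:", np.linalg.norm(ensemble.mean(axis=0) - truth))

In the full method, the replay transitions and the reward would be built from the corrected state estimates, since the true state is only partially observed.
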
@article{ozan2025_2506.01755,
  title={Data-assimilated model-informed reinforcement learning},
  author={Defne E. Ozan and Andrea Nóvoa and Georgios Rigas and Luca Magri},
  journal={arXiv preprint arXiv:2506.01755},
  year={2025}
}