
Recurrent Neural Operators: Stable Long-Term PDE Prediction

Main: 9 pages, Bibliography: 2 pages, Appendix: 7 pages; 7 figures, 5 tables
Abstract

Neural operators have emerged as powerful tools for learning solution operators of partial differential equations. However, in time-dependent problems, standard training strategies such as teacher forcing introduce a mismatch between training and inference, leading to compounding errors in long-term autoregressive predictions. To address this issue, we propose Recurrent Neural Operators (RNOs), a novel framework that integrates recurrent training into neural operator architectures. Instead of conditioning each training step on ground-truth inputs, RNOs recursively apply the operator to their own predictions over a temporal window, effectively simulating inference-time dynamics during training. This alignment mitigates exposure bias and enhances robustness to error accumulation. Theoretically, we show that recurrent training can reduce the worst-case exponential error growth typical of teacher forcing to linear growth. Empirically, we demonstrate that recurrently trained Multigrid Neural Operators significantly outperform their teacher-forced counterparts in long-term accuracy and stability on standard benchmarks. Our results underscore the importance of aligning training with inference dynamics for robust temporal generalization in neural operator learning.
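
To make the training scheme described above concrete, the following is a minimal PyTorch sketch of one recurrent (rollout) training step, assuming a one-step operator interface and a per-step loss; it is not the authors' implementation. The function name recurrent_training_step, the window_length parameter, and the data shapes are hypothetical, and the paper's RNOs use Multigrid Neural Operators rather than the linear stand-in in the demo.

import torch

def recurrent_training_step(operator, u_traj, window_length, optimizer, loss_fn):
    # One recurrent (rollout) training step. Instead of teacher forcing
    # (feeding the ground-truth state at every step), the operator is
    # applied to its own predictions over a temporal window, so training
    # sees the same feedback loop as autoregressive inference.
    optimizer.zero_grad()
    u_pred = u_traj[0]                             # condition only on the initial state
    loss = 0.0
    for t in range(1, window_length + 1):
        u_pred = operator(u_pred)                  # feed back the model's own output
        loss = loss + loss_fn(u_pred, u_traj[t])   # supervise every rollout step
    loss.backward()                                # gradients flow through the whole rollout
    optimizer.step()
    return loss.item()

# Tiny demo with a linear stand-in for the operator (illustrative only).
model = torch.nn.Linear(32, 32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
traj = torch.randn(6, 32)                          # fake trajectory: 6 snapshots of a 32-dim state
step_loss = recurrent_training_step(model, traj, window_length=5,
                                    optimizer=opt, loss_fn=torch.nn.functional.mse_loss)

The contrast with teacher forcing is the line that feeds u_pred back into the operator: a teacher-forced step would instead evaluate operator(u_traj[t-1]) at every step, so errors never compound inside the training loss and the model is never exposed to its own mistakes.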

@article{ye2025_2505.20721,
  title={Recurrent Neural Operators: Stable Long-Term PDE Prediction},
  author={Zaijun Ye and Chen-Song Zhang and Wansheng Wang},
  journal={arXiv preprint arXiv:2505.20721},
  year={2025}
}