Multi-Level Monte Carlo Training of Neural Operators

19 May 2025
James Rowbottom
Stefania Fresca
Pietro Lio
Carola-Bibiane Schönlieb
Nicolas Boullé
Abstract

Operator learning is a rapidly growing field that aims to approximate nonlinear operators related to partial differential equations (PDEs) using neural operators. These rely on discretizations of input and output functions and are usually expensive to train for large-scale, high-resolution problems. Motivated by this, we present a Multi-Level Monte Carlo (MLMC) approach to training neural operators that leverages a hierarchy of resolutions of function discretization. Our framework uses gradient corrections computed from a small number of fine-resolution samples to reduce the computational cost of training while maintaining high accuracy. The proposed MLMC training procedure can be applied to any architecture that accepts multi-resolution data. Our numerical experiments on a range of state-of-the-art models and test cases demonstrate improved computational efficiency compared to traditional single-resolution training, and highlight a Pareto curve between accuracy and computational time governed by the number of samples per resolution.
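To make the training idea concrete, below is a minimal PyTorch sketch of a telescoping MLMC gradient estimator of the kind the abstract describes. The toy 1D CNN, the subsampling downsample restriction, the two-level batch sizes, and the synthetic data are illustrative assumptions, not the authors' implementation; the point is only the structure of the estimator, in which each finer level contributes a low-variance gradient correction computed from few samples.

import torch
import torch.nn as nn

# Toy resolution-agnostic model standing in for a neural operator; any
# architecture accepting multi-resolution inputs (e.g. an FNO) would fit
# the same scheme.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1),
    nn.GELU(),
    nn.Conv1d(16, 1, kernel_size=3, padding=1),
)
loss_fn = nn.MSELoss()

def downsample(u):
    # Restrict a batch of 1D grid functions (batch, channel, points) to
    # the next-coarser level by keeping every other grid point.
    return u[..., ::2]

def mlmc_backward(batches):
    # batches[l] = (inputs, targets) at level l: level 0 is the coarsest
    # resolution with the most samples; finer levels use fewer samples.
    # Telescoping identity behind the estimator:
    #   E[g_L] = E[g_0] + sum_{l=1}^{L} E[g_l - g_{l-1}],
    # where g_l is the loss gradient at resolution level l.
    model.zero_grad()
    total = 0.0
    for level, (x, y) in enumerate(batches):
        term = loss_fn(model(x), y)
        if level > 0:
            # Coupled correction: the same fine samples re-evaluated at
            # the next-coarser resolution, so the difference has small
            # variance and needs only a few samples to estimate.
            term = term - loss_fn(model(downsample(x)), downsample(y))
        total = total + term
    total.backward()  # leaves the MLMC gradient estimate in p.grad

# Illustrative two-level setup: 64 coarse samples on 32 grid points and
# 8 fine samples on 64 grid points (synthetic data for the sketch).
batches = [
    (torch.randn(64, 1, 32), torch.randn(64, 1, 32)),
    (torch.randn(8, 1, 64), torch.randn(8, 1, 64)),
]
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
mlmc_backward(batches)
optimizer.step()

In this sketch, shifting samples from the fine level to the coarse level trades accuracy of the correction terms against per-step cost, which is the sample-allocation trade-off behind the Pareto curve mentioned above.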

@article{rowbottom2025_2505.12940,
  title={Multi-Level Monte Carlo Training of Neural Operators},
  author={James Rowbottom and Stefania Fresca and Pietro Lio and Carola-Bibiane Schönlieb and Nicolas Boullé},
  journal={arXiv preprint arXiv:2505.12940},
  year={2025}
}