FreqMoE: Dynamic Frequency Enhancement for Neural PDE Solvers

11 May 2025
Tianyu Chen, Haoyi Zhou, Ying Li, Hao Wang, Zhenzhe Zhang, Tianchen Zhu, Shanghang Zhang, Jianxin Li
Abstract

Fourier Neural Operators (FNO) have emerged as promising solutions for efficiently solving partial differential equations (PDEs) by learning infinite-dimensional function mappings through frequency-domain transformations. However, the sparsity of high-frequency signals limits computational efficiency for high-dimensional inputs, and fixed-pattern truncation often causes high-frequency signal loss, reducing performance in scenarios such as high-resolution inputs or long-term predictions. To address these challenges, we propose FreqMoE, an efficient and progressive training framework that exploits the dependency of high-frequency signals on low-frequency components. The model first learns low-frequency weights and then applies a sparse upward-cycling strategy to construct a mixture of experts (MoE) in the frequency domain, effectively extending the learned weights to high-frequency regions. Experiments on both regular and irregular grid PDEs demonstrate that FreqMoE achieves up to 16.6% accuracy improvement while using merely 2.1% of the parameters (a 47.32x reduction) compared to dense FNO. Furthermore, the approach demonstrates remarkable stability in long-term predictions and generalizes seamlessly to various FNO variants and grid structures, establishing a new "Low-frequency Pretraining, High-frequency Fine-tuning" paradigm for solving PDEs.
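The sketch below illustrates the core idea the abstract describes: a spectral convolution layer whose dense weights cover only the retained low-frequency modes, with gated "experts" extending coverage to successive high-frequency bands, plus an upcycling step that seeds each expert from the pretrained low-frequency block. This is a minimal reading of the abstract, not the authors' implementation; the class name FreqMoESpectralConv1d, the sigmoid gate, the band partitioning, and the upcycle helper are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class FreqMoESpectralConv1d(nn.Module):
    """Illustrative 1-D spectral layer (assumption, not the paper's code):
    a pretrained low-frequency weight block plus gated high-frequency
    experts that extend the mixing to higher modes."""

    def __init__(self, channels: int, low_modes: int,
                 num_experts: int, modes_per_expert: int):
        super().__init__()
        scale = 1.0 / (channels * channels)
        # Dense weights on the retained low-frequency modes (learned first).
        self.low_weight = nn.Parameter(
            scale * torch.randn(channels, channels, low_modes,
                                dtype=torch.cfloat))
        # One expert per high-frequency band.
        self.experts = nn.ParameterList([
            nn.Parameter(scale * torch.randn(channels, channels,
                                             modes_per_expert,
                                             dtype=torch.cfloat))
            for _ in range(num_experts)])
        self.gate = nn.Parameter(torch.zeros(num_experts))  # per-band gate
        self.low_modes = low_modes
        self.modes_per_expert = modes_per_expert

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, grid); assumes low_modes +
        # num_experts * modes_per_expert <= grid // 2 + 1.
        x_ft = torch.fft.rfft(x)            # to the frequency domain
        out_ft = torch.zeros_like(x_ft)
        # Standard FNO-style mixing on the low-frequency modes.
        out_ft[..., :self.low_modes] = torch.einsum(
            "bim,iom->bom", x_ft[..., :self.low_modes], self.low_weight)
        # Gated experts handle successive high-frequency bands.
        g = torch.sigmoid(self.gate)
        for i, w in enumerate(self.experts):
            lo = self.low_modes + i * self.modes_per_expert
            hi = lo + self.modes_per_expert
            out_ft[..., lo:hi] = g[i] * torch.einsum(
                "bim,iom->bom", x_ft[..., lo:hi], w)
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space


def upcycle(layer: FreqMoESpectralConv1d) -> None:
    """One plausible reading of "sparse upward-cycling" (assumption):
    seed each expert from the pretrained low-frequency weights, freeze
    the low-frequency block, and fine-tune only the experts and gate.
    Requires modes_per_expert <= low_modes."""
    with torch.no_grad():
        for w in layer.experts:
            w.copy_(layer.low_weight[..., :layer.modes_per_expert])
    layer.low_weight.requires_grad_(False)
```

Under this reading, the "Low-frequency Pretraining, High-frequency Fine-tuning" paradigm would train the layer with experts gated off, call upcycle, and then fine-tune; the parameter savings come from experts sharing the low-frequency structure rather than learning a dense weight over all modes.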

View on arXiv: https://arxiv.org/abs/2505.06858
@article{chen2025_2505.06858,
  title={FreqMoE: Dynamic Frequency Enhancement for Neural PDE Solvers},
  author={Tianyu Chen and Haoyi Zhou and Ying Li and Hao Wang and Zhenzhe Zhang and Tianchen Zhu and Shanghang Zhang and Jianxin Li},
  journal={arXiv preprint arXiv:2505.06858},
  year={2025}
}