High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations

Effective surrogate models are critical for accelerating scientific simulations. Implicit neural representations (INRs) offer a compact and continuous framework for modeling spatially structured data, but they often struggle with complex scientific fields exhibiting localized, high-frequency variations. Recent approaches address this by introducing additional features along rigid geometric structures (e.g., grids), but at the cost of flexibility and increased model size. In this paper, we propose a simple yet effective alternative: Feature-Adaptive INR (FA-INR). FA-INR leverages cross-attention to an augmented memory bank to learn flexible feature representations, enabling adaptive allocation of model capacity based on data characteristics, rather than rigid structural assumptions. To further improve scalability, we introduce a coordinate-guided mixture of experts (MoE) that enhances the specialization and efficiency of feature representations. Experiments on three large-scale ensemble simulation datasets show that FA-INR achieves state-of-the-art fidelity while significantly reducing model size, establishing a new trade-off frontier between accuracy and compactness for INR-based surrogates.
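The two ingredients named in the abstract, cross-attention from a coordinate-derived query into a learned memory bank and a coordinate-guided mixture of experts, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all sizes, parameter names, and the function `fa_inr_feature` are assumptions, and real parameters would be learned rather than random.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical sizes (illustrative, not from the paper).
d = 16   # feature dimension
M = 32   # memory-bank entries per expert
E = 4    # number of experts

# Learnable parameters in a real model; random placeholders here.
keys   = rng.normal(size=(E, M, d))   # per-expert memory keys
values = rng.normal(size=(E, M, d))   # per-expert memory values
W_q    = rng.normal(size=(3, d))      # maps a 3-D coordinate to a query
W_gate = rng.normal(size=(3, E))      # coordinate-guided expert gate

def fa_inr_feature(coord):
    """Cross-attend the coordinate's query against each expert's memory
    bank, then mix the experts with a coordinate-conditioned gate."""
    q = coord @ W_q                                                    # (d,)
    attn = softmax(q @ keys.transpose(0, 2, 1) / np.sqrt(d), axis=-1)  # (E, M)
    expert_feats = np.einsum('em,emd->ed', attn, values)               # (E, d)
    gate = softmax(coord @ W_gate)                                     # (E,)
    return gate @ expert_feats  # (d,) feature for a downstream decoder

feat = fa_inr_feature(np.array([0.1, -0.4, 0.7]))
```

Because the attended feature depends only on learned key/value entries and the query, capacity can concentrate wherever the data demand it, rather than being tied to a fixed grid of feature vectors.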
View on arXiv

@article{li2025_2506.06858,
  title   = {High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations},
  author  = {Ziwei Li and Yuhan Duan and Tianyu Xiong and Yi-Tang Chen and Wei-Lun Chao and Han-Wei Shen},
  journal = {arXiv preprint arXiv:2506.06858},
  year    = {2025}
}