
Redefining Neural Operators in $d+1$ Dimensions

Abstract

Neural Operators have emerged as powerful tools for learning mappings between function spaces. Among them, the kernel integral operator has been widely validated for universally approximating various operators. Although recent advances following this definition have developed effective modules to better approximate the kernel function defined on the original domain (with $d$ dimensions, $d = 1, 2, 3, \dots$), the unclear evolution mechanism in the embedding spaces hinders the design of neural operators that can fully capture the target system's evolution. Drawing on recent breakthroughs in quantum simulation of partial differential equations (PDEs), we elucidate the linear evolution process in neural operators. Based on this, we redefine neural operators on a new $(d+1)$-dimensional domain. Within this framework, we implement our proposed Schrödingerised Kernel Neural Operator (SKNO), which aligns better with the $(d+1)$-dimensional evolution. In experiments, our $(d+1)$-dimensional evolving linear block substantially outperforms alternatives. We also demonstrate SKNO's state-of-the-art performance on various benchmarks as well as on the zero-shot super-resolution task. In addition, we analyse the impact of different lifting and recovering operators on the prediction within the redefined NO framework, reflecting the alignment between our model and the underlying $(d+1)$-dimensional evolution.
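For readers unfamiliar with the kernel integral operator the abstract builds on, the following is a minimal, illustrative sketch of a standard $d$-dimensional kernel integral block parameterized in Fourier space (FNO-style), not the authors' SKNO or its $(d+1)$-dimensional evolution; it assumes PyTorch, and the class and variable names are hypothetical.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Kernel integral operator parameterized in Fourier space (FNO-style sketch).

    Illustrative only: SKNO as described in the paper lifts the evolution to a
    (d+1)-dimensional domain; this shows the conventional d-dimensional block.
    """
    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes retained
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):
        # x: (batch, in_channels, n_grid) -- function values on a 1d grid
        x_ft = torch.fft.rfft(x)  # transform to Fourier space
        out_ft = torch.zeros(
            x.size(0), self.weights.size(1), x_ft.size(-1),
            dtype=torch.cfloat, device=x.device,
        )
        # multiply the retained modes by the learned kernel (channel mixing per mode)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space

# usage: a lifted field with 8 channels on a grid of 128 points
layer = SpectralConv1d(in_channels=8, out_channels=8, modes=16)
u = torch.randn(4, 8, 128)
print(layer(u).shape)  # torch.Size([4, 8, 128])
```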

@article{song2025_2505.11766,
  title={Redefining Neural Operators in $d+1$ Dimensions},
  author={Haoze Song and Zhihao Li and Xiaobo Zhang and Zecheng Gan and Zhilu Lai and Wei Wang},
  journal={arXiv preprint arXiv:2505.11766},
  year={2025}
}