GITO: Graph-Informed Transformer Operator for Learning Complex Partial Differential Equations

Main: 7 pages · Appendix: 3 pages · Bibliography: 2 pages · 6 figures · 6 tables
Abstract

We present a novel graph-informed transformer operator (GITO) architecture for learning complex partial differential equation systems defined on irregular geometries and non-uniform meshes. GITO consists of two main modules: a hybrid graph transformer (HGT) and a transformer neural operator (TNO). The HGT leverages a graph neural network (GNN) to encode local spatial relationships and a transformer to capture long-range dependencies; a self-attention fusion layer then integrates the outputs of the GNN and transformer to enable more expressive feature learning on graph-structured data. The TNO module employs linear-complexity cross-attention and self-attention layers to map encoded input functions to predictions at arbitrary query locations, ensuring discretization invariance and enabling zero-shot super-resolution across any mesh. Empirical results on benchmark PDE tasks demonstrate that GITO outperforms existing transformer-based neural operators, paving the way for efficient, mesh-agnostic surrogate solvers in engineering applications.
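The two mechanisms the abstract describes, self-attention fusion of GNN and transformer features, followed by cross-attention decoding at arbitrary query points, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: all shapes, feature widths, and the random stand-in features are illustrative, and plain (quadratic) scaled dot-product attention is used for clarity where the paper specifies linear-complexity attention layers.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Standard scaled dot-product attention (the paper uses a
    # linear-complexity variant; this is a readability stand-in).
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
d = 16          # feature width (illustrative)
n_mesh = 40     # nodes of the (possibly non-uniform) input mesh
n_query = 100   # arbitrary query locations, independent of n_mesh

# Stand-ins for the two HGT branches: local features from a GNN
# and long-range features from a transformer over the same nodes.
gnn_feats = rng.standard_normal((n_mesh, d))
trans_feats = rng.standard_normal((n_mesh, d))

# Self-attention fusion: treat both branches' outputs as tokens and
# attend over the combined set, keeping one fused token per mesh node.
tokens = np.concatenate([gnn_feats, trans_feats], axis=0)
fused = attention(tokens, tokens, tokens)[:n_mesh]

# Cross-attention decoding: embeddings of arbitrary query coordinates
# attend to the fused mesh features, so the output resolution is
# decoupled from the input mesh (zero-shot super-resolution).
query_emb = rng.standard_normal((n_query, d))
out = attention(query_emb, fused, fused)

print(out.shape)  # one d-dimensional prediction per query point
```

Because the queries only enter through the cross-attention step, the same trained operator can be evaluated on a finer or coarser set of points than it was trained on, which is the discretization-invariance property the abstract highlights.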

@article{ramezankhani2025_2506.13906,
  title={GITO: Graph-Informed Transformer Operator for Learning Complex Partial Differential Equations},
  author={Milad Ramezankhani and Janak M. Patel and Anirudh Deodhar and Dagnachew Birru},
  journal={arXiv preprint arXiv:2506.13906},
  year={2025}
}