We consider the problem of learning the evolution operator for the time-dependent Schrödinger equation, where the Hamiltonian may vary with time. Existing neural network-based surrogates often ignore fundamental properties of the Schrödinger equation, such as linearity and unitarity, and lack theoretical guarantees on prediction error or time generalization. To address this, we introduce a linear estimator for the evolution operator that preserves a weak form of unitarity. We establish both upper and lower bounds on the prediction error that hold uniformly over all sufficiently smooth initial wave functions. Additionally, we derive time generalization bounds that quantify how the estimator extrapolates beyond the time points seen during training. Experiments across real-world Hamiltonians -- including hydrogen atoms, ion traps for qubit design, and optical lattices -- show that our estimator achieves smaller relative errors than state-of-the-art methods such as the Fourier Neural Operator and DeepONet.
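To illustrate the general idea of a linear, unitarity-respecting estimator, the sketch below fits a linear evolution operator to snapshot pairs by least squares and then projects it onto the nearest unitary matrix via the polar (SVD) projection. This is a minimal illustration under assumed details, not the authors' actual estimator: the discretization, the training data (`X`, `Y`), and the projection step are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 32  # state dimension and number of training wave functions (illustrative)

# Ground-truth evolution operator: a random unitary (stand-in for exp(-iHt))
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U_true, _ = np.linalg.qr(A)

# Training snapshots: normalized initial states X and their evolved states Y = U X
X = rng.normal(size=(d, n)) + 1j * rng.normal(size=(d, n))
X /= np.linalg.norm(X, axis=0)
Y = U_true @ X

# Least-squares linear estimator: U_ls = Y X^+ (pseudoinverse solution)
U_ls = Y @ np.linalg.pinv(X)

# Enforce unitarity by projecting onto the nearest unitary matrix (polar projection)
W, _, Vh = np.linalg.svd(U_ls)
U_hat = W @ Vh

print(np.linalg.norm(U_hat.conj().T @ U_hat - np.eye(d)))  # unitarity defect, ~0
print(np.linalg.norm(U_hat - U_true))                      # recovery error
```

With enough linearly independent training states (here n >= d), the least-squares fit already recovers the operator, and the SVD projection guarantees the estimate is exactly unitary, so predicted wave functions keep unit norm.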
@article{patel2025_2505.18288,
  title={Operator Learning for Schrödinger Equation: Unitarity, Error Bounds, and Time Generalization},
  author={Yash Patel and Unique Subedi and Ambuj Tewari},
  journal={arXiv preprint arXiv:2505.18288},
  year={2025}
}