
Continuous-Time Attention: PDE-Guided Mechanisms for Long-Sequence Transformers

Main: 7 Pages
16 Figures
Bibliography: 3 Pages
13 Tables
Appendix: 11 Pages
Abstract

We propose a novel framework, Continuous-Time Attention, which infuses partial differential equations (PDEs) into the Transformer's attention mechanism to address the challenges of extremely long input sequences. Instead of relying solely on a static attention matrix, we allow attention weights to evolve over a pseudo-time dimension via diffusion, wave, or reaction-diffusion dynamics. This mechanism systematically smooths local noise, enhances long-range dependencies, and stabilizes gradient flow. Theoretically, our analysis shows that PDE-based attention leads to better optimization landscapes and polynomial rather than exponential decay of distant interactions. Empirically, we benchmark our method on diverse experiments, demonstrating consistent gains over both standard and specialized long-sequence Transformer variants. Our findings highlight the potential of PDE-based formulations to enrich attention mechanisms with continuous-time dynamics and global coherence.
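
To make the pseudo-time evolution concrete, here is a minimal sketch of one way such a mechanism could be realized: the usual softmax attention matrix is updated by a few explicit-Euler diffusion steps along the key dimension and then re-normalized. The function name `diffused_attention`, the step count, step size, boundary handling, and re-normalization are illustrative assumptions; the paper's actual discretization (and its wave and reaction-diffusion variants) may differ.

```python
# Sketch only: assumes explicit-Euler time stepping, replicated (zero-flux)
# boundaries, and row re-normalization; the paper's exact scheme is not
# specified in the abstract.
import torch

def diffused_attention(q, k, v, steps=4, dt=0.1):
    """Scaled dot-product attention whose weight matrix evolves over a
    pseudo-time dimension via a discretized diffusion PDE along the key axis."""
    d = q.size(-1)
    attn = torch.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1)  # (..., L_q, L_k)

    for _ in range(steps):
        # Discrete 1-D Laplacian over the key dimension with replicated boundaries.
        left = torch.cat([attn[..., :1], attn[..., :-1]], dim=-1)
        right = torch.cat([attn[..., 1:], attn[..., -1:]], dim=-1)
        laplacian = left - 2.0 * attn + right
        attn = attn + dt * laplacian                   # one explicit-Euler diffusion step
        attn = attn.clamp_min(0.0)                     # numerical safety
        attn = attn / attn.sum(dim=-1, keepdim=True)   # keep each row a distribution

    return attn @ v

# Usage with (batch, heads, seq_len, head_dim) tensors:
q = k = v = torch.randn(2, 4, 128, 64)
out = diffused_attention(q, k, v)
print(out.shape)  # torch.Size([2, 4, 128, 64])
```

Each diffusion step spreads attention mass between neighboring key positions, which is one concrete reading of how local noise is smoothed and distant interactions decay polynomially rather than exponentially.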

@article{zhang2025_2505.20666,
  title={Continuous-Time Attention: PDE-Guided Mechanisms for Long-Sequence Transformers},
  author={Yukun Zhang and Xueqing Zhou},
  journal={arXiv preprint arXiv:2505.20666},
  year={2025}
}