Optimal Control for Transformer Architectures: Enhancing Generalization, Robustness and Efficiency

We study Transformers from the perspective of optimal control theory, using tools from continuous-time formulations to derive actionable insights into training and architecture design. This framework improves the performance of existing Transformer models while providing theoretical guarantees on generalization and robustness. It is designed to be plug-and-play, enabling seamless integration with established Transformer models and requiring only minor implementation changes. We conduct seven experiments spanning text generation, sentiment analysis, image classification, and point cloud classification. Experimental results show that the framework improves the test performance of the baselines while being more parameter-efficient. On character-level text generation with nanoGPT, our framework achieves a 46% reduction in final test loss while using 42% fewer parameters. On GPT-2, it achieves a 5.6% reduction in final test loss, demonstrating scalability to larger models. To the best of our knowledge, this is the first work to apply optimal control theory to both the training and the architecture of Transformers. It offers a new foundation for systematic, theory-driven improvements and moves beyond costly trial-and-error approaches.
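For orientation, the sketch below illustrates the continuous-time reading of a residual Transformer that motivates such optimal-control treatments: each block is interpreted as one forward-Euler step x_{t+1} = x_t + h · f_θ(x_t) of a dynamical system, and sharing the block across steps is one way a continuous-time formulation can reduce parameter count. This is an illustrative sketch only, not the paper's framework; the class names, step_size, and n_steps are hypothetical choices for exposition.

```python
# Minimal sketch (assumption: standard neural-ODE view of a residual Transformer,
# not the method proposed in the paper). Each Euler step reuses one shared block.
import torch
import torch.nn as nn


class ODETransformerBlock(nn.Module):
    """Pre-norm attention + MLP used as the vector field f_theta(x)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Return only the update direction; the residual connection is supplied
        # by the Euler step in the integrator below.
        h = self.norm1(x)
        a, _ = self.attn(h, h, h, need_weights=False)
        return a + self.mlp(self.norm2(x))


class ContinuousTimeTransformer(nn.Module):
    """Integrates one weight-shared block with forward Euler over n_steps."""

    def __init__(self, d_model: int = 64, n_heads: int = 4,
                 n_steps: int = 6, step_size: float = 1.0):
        super().__init__()
        self.block = ODETransformerBlock(d_model, n_heads)  # shared weights
        self.n_steps = n_steps
        self.step_size = step_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.n_steps):
            x = x + self.step_size * self.block(x)  # x <- x + h * f_theta(x)
        return x


# Usage: tokens already embedded to shape (batch, seq_len, d_model).
x = torch.randn(2, 16, 64)
y = ContinuousTimeTransformer()(x)
print(y.shape)  # torch.Size([2, 16, 64])
```

Because the same block parameters are reused at every step, the layer count (number of Euler steps) can grow without growing the parameter count, which is the kind of parameter efficiency the abstract reports.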
@article{kan2025_2505.13499,
  title   = {Optimal Control for Transformer Architectures: Enhancing Generalization, Robustness and Efficiency},
  author  = {Kelvin Kan and Xingjian Li and Benjamin J. Zhang and Tuhin Sahai and Stanley Osher and Markos A. Katsoulakis},
  journal = {arXiv preprint arXiv:2505.13499},
  year    = {2025}
}