
Transformers Learn Faster with Semantic Focus

Main: 19 pages
Bibliography: 6 pages
Appendix: 36 pages
Figures: 38
Tables: 6
Abstract

Various forms of sparse attention have been explored to mitigate the quadratic computational and memory cost of the attention mechanism in transformers. We study sparse transformers not through a lens of efficiency but rather in terms of learnability and generalization. Empirically studying a range of attention mechanisms, we find that input-dependent sparse attention models appear to converge faster and generalize better than standard attention models, while input-agnostic sparse attention models show no such benefits -- a phenomenon that is robust across architectural and optimization hyperparameter choices. This can be interpreted as demonstrating that concentrating a model's "semantic focus" with respect to the tokens currently being considered (in the form of input-dependent sparse attention) accelerates learning. We develop a theoretical characterization of the conditions that explain this behavior. We establish a connection between the stability of the standard softmax and the loss function's Lipschitz properties, then show how sparsity affects the stability of the softmax and the subsequent convergence and generalization guarantees resulting from the attention mechanism. This allows us to theoretically establish that input-agnostic sparse attention does not provide any benefits. We also characterize conditions when semantic focus (input-dependent sparse attention) can provide improved guarantees, and we validate that these conditions are in fact met in our empirical evaluations.
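To make the distinction at the center of the abstract concrete, here is a minimal sketch contrasting the two families of sparse attention it compares. The particular choices below (top-k score masking as a representative input-dependent scheme, and a fixed sliding-window mask as a representative input-agnostic scheme) are illustrative assumptions, not necessarily the exact mechanisms evaluated in the paper.

```python
# Illustrative contrast between input-dependent and input-agnostic sparse
# attention. The specific mechanisms (top-k masking, sliding window) are
# assumptions for this sketch, not the paper's definitive implementations.
import torch
import torch.nn.functional as F


def topk_sparse_attention(q, k, v, k_keep=4):
    """Input-dependent sparsity: which keys survive depends on the scores."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d**0.5               # (..., T, T)
    thresh = scores.topk(k_keep, dim=-1).values[..., -1:]   # k-th largest score per query
    scores = scores.masked_fill(scores < thresh, float("-inf"))
    return F.softmax(scores, dim=-1) @ v


def sliding_window_attention(q, k, v, window=2):
    """Input-agnostic sparsity: the mask is fixed by position alone."""
    d = q.size(-1)
    T = q.size(-2)
    scores = q @ k.transpose(-2, -1) / d**0.5
    idx = torch.arange(T)
    mask = (idx[:, None] - idx[None, :]).abs() > window     # True = masked out
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v


if __name__ == "__main__":
    q = torch.randn(1, 8, 16)   # (batch, tokens, dim)
    k = torch.randn(1, 8, 16)
    v = torch.randn(1, 8, 16)
    print(topk_sparse_attention(q, k, v).shape)      # torch.Size([1, 8, 16])
    print(sliding_window_attention(q, k, v).shape)   # torch.Size([1, 8, 16])
```

In the first function the surviving keys change with the input (the model's "semantic focus"), whereas in the second the sparsity pattern is fixed in advance; the paper's empirical and theoretical results concern this difference.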

@article{ram2025_2506.14095,
  title={Transformers Learn Faster with Semantic Focus},
  author={Parikshit Ram and Kenneth L. Clarkson and Tim Klinger and Shashanka Ubaru and Alexander G. Gray},
  journal={arXiv preprint arXiv:2506.14095},
  year={2025}
}