Scale-invariant Attention

Main: 9 pages · Appendix: 9 pages · Bibliography: 3 pages · 9 figures · 2 tables
Abstract

One persistent challenge in LLM research is the development of attention mechanisms that are able to generalise from training on shorter contexts to inference on longer contexts. We propose two conditions that we expect all effective long-context attention mechanisms to satisfy: scale-invariant total attention, and scale-invariant attention sparsity. Under a Gaussian assumption, we show that a simple position-dependent transformation of the attention logits is sufficient for these conditions to hold. Experimentally, we find that the resulting scale-invariant attention scheme gives considerable benefits in terms of validation loss when zero-shot generalising from training on short contexts to validation on longer contexts, and is effective at long-context retrieval.

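To make the abstract's central idea concrete, the sketch below shows where a position-dependent transformation of the attention logits would sit in a standard causal attention computation. The specific transformation `g(t)` used here (an inverse-log scaling by query position) is a hypothetical stand-in for illustration only; the paper derives the actual transformation that yields scale-invariant total attention and scale-invariant attention sparsity.

```python
# Illustrative sketch only: the exact position-dependent transformation is
# defined in the paper; a hypothetical log-position scaling stands in for it
# here to show where such a transform enters the attention computation.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def position_transformed_attention(q, k, v):
    """Causal single-head attention with a position-dependent logit transform.

    q, k, v: arrays of shape (T, d). The factor g(t) applied to the logits of
    query position t is a placeholder (1 / log(t + e)); the paper specifies
    the transformation that gives scale-invariant attention.
    """
    T, d = q.shape
    logits = q @ k.T / np.sqrt(d)                # (T, T) raw attention logits
    t = np.arange(T)
    g = 1.0 / np.log(t + np.e)                   # hypothetical position-dependent factor
    logits = logits * g[:, None]                 # transform logits per query position
    mask = np.tril(np.ones((T, T), dtype=bool))  # causal mask
    logits = np.where(mask, logits, -np.inf)
    return softmax(logits, axis=-1) @ v

# Toy usage: context length 16, head dimension 8.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
out = position_transformed_attention(q, k, v)
print(out.shape)  # (16, 8)
```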
@article{anson2025_2505.17083,
  title={Scale-invariant Attention},
  author={Ben Anson and Xi Wang and Laurence Aitchison},
  journal={arXiv preprint arXiv:2505.17083},
  year={2025}
}