
Length Generalization in Arithmetic Transformers

Abstract

We examine how transformers cope with two challenges: learning basic integer arithmetic, and generalizing to longer sequences than seen during training. We find that relative position embeddings enable length generalization for simple tasks, such as addition: models trained on 5-digit numbers can perform 15-digit sums. However, this method fails for multiplication, and we propose train set priming: adding a few (10 to 50) long sequences to the training set. We show that priming allows models trained on 5-digit × 3-digit multiplications to generalize to 35 × 3 examples. We also show that models can be primed for different generalization lengths, and that the priming sample size scales as the logarithm of the training set size. Finally, we discuss potential applications of priming beyond arithmetic.
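
As a concrete illustration of what train set priming amounts to at the data level, the sketch below mixes a small number of long multiplication examples into an otherwise short-operand training set. The operand lengths, sample counts, and the `a*b=c` text encoding are illustrative assumptions, not the paper's actual data pipeline.

```python
# A minimal sketch of "train set priming" as described in the abstract:
# many short multiplications (operands up to 5 x 3 digits here) plus a
# handful of long "priming" examples at the target generalization length
# (35 x 3 digits). All names and parameters are illustrative assumptions.
import random

def make_example(n_digits_a: int, n_digits_b: int) -> str:
    """Encode one multiplication as a text sequence 'a*b=c'."""
    a = random.randint(10 ** (n_digits_a - 1), 10 ** n_digits_a - 1)
    b = random.randint(10 ** (n_digits_b - 1), 10 ** n_digits_b - 1)
    return f"{a}*{b}={a * b}"

def build_primed_train_set(n_short: int = 100_000,
                           n_prime: int = 50,
                           short_len: int = 5,
                           long_len: int = 35,
                           other_len: int = 3) -> list[str]:
    """Short examples plus a few long 'priming' sequences, shuffled together."""
    data = [make_example(random.randint(1, short_len),
                         random.randint(1, other_len))
            for _ in range(n_short)]
    data += [make_example(long_len, other_len) for _ in range(n_prime)]
    random.shuffle(data)
    return data

if __name__ == "__main__":
    train = build_primed_train_set()
    print(len(train), train[0])
```

The point of the sketch is that priming is purely a data intervention: the model and training loop are unchanged, and only tens of long sequences are added to a training set that may contain hundreds of thousands of short ones.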
