
Understanding In-context Learning of Addition via Activation Subspaces

Abstract

To perform in-context learning, language models must extract signals from individual few-shot examples, aggregate these into a learned prediction rule, and then apply this rule to new examples. How is this implemented in the forward pass of modern transformer models? To study this, we consider a structured family of few-shot learning tasks for which the true prediction rule is to add an integer k to the input. We find that Llama-3-8B attains high accuracy on this task for a range of k, and localize its few-shot ability to just three attention heads via a novel optimization approach. We further show that the extracted signals lie in a six-dimensional subspace, where four of the dimensions track the unit digit and the other two dimensions track overall magnitude. We finally examine how these heads extract information from individual few-shot examples, identifying a self-correction mechanism in which mistakes from earlier examples are suppressed by later examples. Our results demonstrate how tracking low-dimensional subspaces across a forward pass can provide insight into fine-grained computational structures.
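To make the task setup concrete, the sketch below constructs add-k few-shot prompts and fits a rank-6 subspace to (placeholder) head activations with PCA. This is a minimal illustration, not the authors' implementation: the prompt template, the function names, and the use of PCA in place of the paper's optimization-based localization are all assumptions.

# Illustrative sketch (assumed setup, not the paper's code): build the add-k
# few-shot task and approximate a six-dimensional activation subspace via PCA.
import numpy as np

rng = np.random.default_rng(0)

def make_addk_prompt(k, n_examples=5, low=10, high=90):
    # The exact prompt format used in the paper is not specified here;
    # this "x -> x+k" template is an assumption for illustration.
    xs = rng.integers(low, high, size=n_examples + 1)
    demos = [f"{x} -> {x + k}" for x in xs[:-1]]
    query = f"{xs[-1]} -> "          # the model should complete with xs[-1] + k
    return "\n".join(demos) + "\n" + query, int(xs[-1] + k)

prompt, answer = make_addk_prompt(k=7)
print(prompt)
print("expected answer:", answer)

def top_subspace(activations, dim=6):
    # Rows of vt are principal directions of the centered activations;
    # keeping the leading `dim` rows gives a low-dimensional basis.
    centered = activations - activations.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:dim]                  # shape: (dim, hidden_size)

# Random placeholder activations stand in for real head outputs at the
# final token (one row per prompt).
fake_acts = rng.normal(size=(128, 4096))
basis = top_subspace(fake_acts, dim=6)
print("subspace basis shape:", basis.shape)

In an actual analysis, fake_acts would be replaced by the outputs of the identified attention heads collected across many add-k prompts, and the resulting basis could then be inspected for directions tracking the unit digit versus overall magnitude, as described in the abstract.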

@article{hu2025_2505.05145,
  title={Understanding In-context Learning of Addition via Activation Subspaces},
  author={Xinyan Hu and Kayo Yin and Michael I. Jordan and Jacob Steinhardt and Lijie Chen},
  journal={arXiv preprint arXiv:2505.05145},
  year={2025}
}