Out-of-Distribution Generalization of In-Context Learning: A Low-Dimensional Subspace Perspective

Abstract

This work aims to demystify the out-of-distribution (OOD) capabilities of in-context learning (ICL) by studying linear regression tasks parameterized with low-rank covariance matrices. With such a parameterization, we can model distribution shifts as a varying angle between the subspaces of the training and testing covariance matrices. We prove that a single-layer linear attention model incurs a test risk with a non-negligible dependence on this angle, illustrating that ICL is not robust to such distribution shifts. However, using this framework, we also prove an interesting property of ICL: when trained on task vectors drawn from a union of low-dimensional subspaces, ICL can generalize to any subspace within their span, given a sufficiently long prompt. This suggests that the OOD generalization ability of Transformers may actually stem from the new task lying within the span of those encountered during training. We empirically show that our results also hold for models such as GPT-2, and conclude with (i) experiments on how our observations extend to nonlinear function classes and (ii) results on how LoRA can capture distribution shifts.
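For concreteness, the setup described above can be sketched in a few lines of NumPy: task vectors are drawn from a k-dimensional subspace spanned by an orthonormal basis U, and the distribution shift is induced by tilting that subspace by an angle theta at test time. This is an illustrative sketch, not the authors' code; the dimensions, the angle, and the function names are assumptions chosen for exposition.

import numpy as np

def sample_prompt(U, n_ctx, noise_std=0.0, rng=None):
    # Draw one in-context prompt: (x_i, y_i) pairs for a task vector beta
    # constrained to lie in the column span of U.
    rng = np.random.default_rng() if rng is None else rng
    d, k = U.shape
    beta = U @ rng.standard_normal(k)               # task vector in span(U)
    X = rng.standard_normal((n_ctx, d))
    y = X @ beta + noise_std * rng.standard_normal(n_ctx)
    return X, y, beta

def rotate_subspace(U, theta, rng=None):
    # Tilt the first basis direction of U by angle theta toward a direction
    # orthogonal to span(U), so the largest principal angle between the
    # training and test subspaces equals theta.
    rng = np.random.default_rng() if rng is None else rng
    d, k = U.shape
    Q, _ = np.linalg.qr(np.hstack([U, rng.standard_normal((d, 1))]))
    v = Q[:, k]                                     # unit vector orthogonal to span(U)
    U_test = U.copy()
    U_test[:, 0] = np.cos(theta) * U[:, 0] + np.sin(theta) * v
    return U_test

d, k, n_ctx = 20, 3, 50                             # ambient dim, subspace dim, prompt length
rng = np.random.default_rng(0)
U_train, _ = np.linalg.qr(rng.standard_normal((d, k)))        # orthonormal training basis
U_test = rotate_subspace(U_train, theta=np.pi / 4, rng=rng)   # OOD subspace at 45 degrees
X, y, beta = sample_prompt(U_test, n_ctx, rng=rng)

In this toy setting, a model trained on prompts generated from U_train can be evaluated on prompts from U_test while sweeping theta, which is exactly the kind of angle-dependent test risk the paper analyzes.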

@article{kwon2025_2505.14808,
  title={Out-of-Distribution Generalization of In-Context Learning: A Low-Dimensional Subspace Perspective},
  author={Soo Min Kwon and Alec S. Xu and Can Yaras and Laura Balzano and Qing Qu},
  journal={arXiv preprint arXiv:2505.14808},
  year={2025}
}