Provable test-time adaptivity and distributional robustness of in-context learning

Main: 10 pages
Bibliography: 4 pages
Appendix: 30 pages
Abstract

We study in-context learning problems where a Transformer is pretrained on tasks drawn from a mixture distribution $\pi=\sum_{\alpha\in\mathcal{A}} \lambda_{\alpha} \pi_{\alpha}$, called the pretraining prior, in which each mixture component $\pi_{\alpha}$ is a distribution on tasks of a specific difficulty level indexed by $\alpha$. Our goal is to understand the performance of the pretrained Transformer when evaluated on a different test distribution $\mu$, consisting of tasks of fixed difficulty $\beta\in\mathcal{A}$, and with potential distribution shift relative to $\pi_{\beta}$, subject to the chi-squared divergence $\chi^2(\mu,\pi_{\beta})$ being at most $\kappa$. In particular, we consider nonparametric regression problems with random smoothness, and multi-index models with random smoothness as well as random effective dimension. We prove that a large Transformer pretrained on sufficient data achieves the optimal rate of convergence corresponding to the difficulty level $\beta$, uniformly over test distributions $\mu$ in the chi-squared divergence ball. Thus, the pretrained Transformer achieves faster rates of convergence on easier tasks and is robust to distribution shift at test time. Finally, we prove that even if an estimator had access to the test distribution $\mu$, the convergence rate of its expected risk over $\mu$ could not be faster than that of our pretrained Transformers, thereby providing a more appropriate optimality guarantee than minimax lower bounds.
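As a minimal numerical sketch of the setup described above (not the paper's construction), the snippet below builds a toy discrete pretraining prior $\pi=\sum_{\alpha} \lambda_{\alpha} \pi_{\alpha}$ over a finite task grid, then constructs a test distribution $\mu$ that stays inside the chi-squared divergence ball $\chi^2(\mu,\pi_{\beta})\le\kappa$ around a fixed-difficulty component $\pi_{\beta}$. The difficulty levels, grid size, and perturbation scheme are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical difficulty levels alpha (e.g. smoothness indices) and
# mixture weights lambda_alpha; all values here are illustrative.
levels = [0.5, 1.0, 2.0]
weights = np.array([0.2, 0.5, 0.3])

n_tasks = 8
# pi_alpha: one discrete task distribution per difficulty level (toy stand-in)
components = {a: rng.dirichlet(np.ones(n_tasks)) for a in levels}

# Pretraining prior: pi = sum_alpha lambda_alpha * pi_alpha
pi = sum(w * components[a] for w, a in zip(weights, levels))

def chi_squared(mu, pi):
    """Chi-squared divergence chi^2(mu, pi) = sum_i (mu_i - pi_i)^2 / pi_i."""
    return float(np.sum((mu - pi) ** 2 / pi))

# Test distribution mu: a perturbation of pi_beta for a fixed difficulty beta,
# shrunk until it is a valid distribution inside the kappa-ball.
beta = 1.0
kappa = 0.05
pi_beta = components[beta]
direction = rng.normal(size=n_tasks)
direction -= direction.mean()  # zero-sum perturbation keeps mu summing to 1

eps = 1.0
while True:
    mu = pi_beta + eps * direction
    if np.all(mu > 0) and chi_squared(mu, pi_beta) <= kappa:
        break
    eps *= 0.5  # terminates: as eps -> 0, chi^2 -> 0 and mu -> pi_beta > 0

print(chi_squared(mu, pi_beta) <= kappa)  # True: mu lies in the kappa-ball
```

The zero-mean centering of the perturbation direction is what keeps $\mu$ a probability vector; the halving loop is the simplest way to land inside the ball without solving for the exact boundary.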
