Next-token prediction with the logarithmic loss is a cornerstone of autoregressive sequence modeling, but, in practice, it suffers from error amplification: errors in the model compound, and generation quality degrades as the sequence length $H$ increases. From a theoretical perspective, this phenomenon should not appear in well-specified settings, and, indeed, a growing body of empirical work hypothesizes that misspecification, where the learner is not sufficiently expressive to represent the target distribution, may be the root cause. Under misspecification -- where the goal is to learn as well as the best-in-class model up to a multiplicative approximation factor $C \ge 1$ -- we confirm that $C$ indeed grows with $H$ for next-token prediction, lending theoretical support to this empirical hypothesis. We then ask whether this mode of error amplification is avoidable algorithmically, computationally, or information-theoretically, and uncover inherent computational-statistical tradeoffs. We show:

(1) Information-theoretically, one can avoid error amplification and achieve $C = O(1)$.

(2) Next-token prediction can be made robust so as to achieve $C = \tilde{O}(H)$, representing moderate error amplification, but this is an inherent barrier: any next-token prediction-style objective must suffer $C = \Omega(H)$.

(3) For the natural testbed of autoregressive linear models, no computationally efficient algorithm can achieve a sub-polynomial approximation factor $C = H^{o(1)}$; however, at least for binary token spaces, one can smoothly trade compute for statistical power and improve on $C = \Theta(H)$ in sub-exponential time.

Our results have consequences in the more general setting of imitation learning, where the widely used behavior cloning algorithm generalizes next-token prediction.
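For concreteness, "next-token prediction with the logarithmic loss" refers to the standard maximum-likelihood objective sketched below; the notation (model class $\Pi$, horizon $H$, sampled sequences $y^{(1)}, \dots, y^{(n)}$) is ours for illustration and not necessarily the paper's:

$$\hat{\pi} \in \arg\min_{\pi \in \Pi} \sum_{i=1}^{n} \sum_{h=1}^{H} -\log \pi\big(y^{(i)}_h \mid y^{(i)}_{1:h-1}\big).$$

Each term penalizes the model's log-probability of the observed token given its prefix, so the objective decomposes across positions; this per-token structure is, plausibly, what distinguishes the "next-token prediction-style" objectives to which the barrier in (2) applies.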
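Similarly, "learning as well as the best-in-class model up to a multiplicative approximation factor" can be read as a guarantee of the following shape; this is a hedged sketch, where $\mathrm{err}(\cdot)$ stands for an unspecified sequence-level error measure and $\varepsilon_{\mathrm{stat}}$ for a statistical error vanishing with sample size, neither of which is pinned down by the abstract:

$$\mathrm{err}(\hat{\pi}) \le C \cdot \min_{\pi \in \Pi} \mathrm{err}(\pi) + \varepsilon_{\mathrm{stat}}.$$

In a well-specified setting the best-in-class error is zero, so $C$ is immaterial; under misspecification it is nonzero, and a factor $C$ that grows with the horizon $H$ is exactly the error amplification described above.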
@article{rohatgi2025_2502.12465,
  title={Computational-Statistical Tradeoffs at the Next-Token Prediction Barrier: Autoregressive and Imitation Learning under Misspecification},
  author={Dhruv Rohatgi and Adam Block and Audrey Huang and Akshay Krishnamurthy and Dylan J. Foster},
  journal={arXiv preprint arXiv:2502.12465},
  year={2025}
}