Exact Expressive Power of Transformers with Padding

Abstract

Chain of thought is a natural inference-time method for increasing the computational power of transformer-based large language models (LLMs), but comes at the cost of sequential decoding. Are there more efficient alternatives to expand a transformer's expressive power without adding parameters? We consider transformers with padding tokens as a form of parallelizable test-time compute. We show that averaging-hard-attention, masked-pre-norm transformers with polynomial padding converge to precisely the class $\mathsf{TC}^0$ of extremely parallelizable problems. While the $\mathsf{TC}^0$ upper bound was known, proving a matching lower bound had been elusive. Further, our novel analysis reveals the precise expanded power of padded transformers when coupled with another form of inference-time compute, namely dynamically increasing depth via looping. Our core technical contribution is to show how padding helps bring the notions of complete problems and reductions, which have been a cornerstone of classical complexity theory, to the formal study of transformers. Armed with this new tool, we prove that padded transformers with $O(\log^d n)$ looping on inputs of length $n$ recognize exactly the class $\mathsf{TC}^d$ of moderately parallelizable problems. Thus, padding and looping together systematically expand transformers' expressive power: with polylogarithmic looping, padded transformers converge to the class $\mathsf{NC}$, the best that could be expected without losing parallelism (unless $\mathsf{NC} = \mathsf{P}$). Our results thus motivate further exploration of padding and looping as parallelizable alternatives to chain of thought.
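
To make the padding-plus-looping recipe concrete, the following is a minimal Python sketch of the inference-time setup the abstract describes: append polynomially many blank padding tokens to the input, then rerun a single fixed transformer block roughly $\log^d n$ times. The transformer_block callable, the quadratic padding budget, and the loop count are illustrative assumptions for exposition, not the paper's actual construction.

import math

def pad_and_loop(tokens, transformer_block, pad_token="<pad>", d=2):
    """Illustrative sketch (assumed interface, not the paper's construction):
    extend the input with blank padding tokens, then apply one fixed
    transformer block for ~log^d(n) sequential passes."""
    n = len(tokens)
    # Polynomial padding: grow the sequence to n^2 positions with blank tokens.
    padded = tokens + [pad_token] * (n * n - n)
    # Looping: reuse the same parameters for O(log^d n) passes over the sequence.
    num_loops = max(1, math.ceil(math.log2(max(n, 2))) ** d)
    states = padded
    for _ in range(num_loops):
        states = transformer_block(states)
    # Read the answer off a designated output position.
    return states[0]

# Call shape only, with a trivial identity "block":
# pad_and_loop(["1", "0", "1"], transformer_block=lambda xs: xs)

The contrast with chain of thought is that each looped pass processes every position, padding included, in parallel, so the sequential cost grows only with the number of loop iterations rather than with the number of extra tokens.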

@article{merrill2025_2505.18948,
  title={Exact Expressive Power of Transformers with Padding},
  author={William Merrill and Ashish Sabharwal},
  journal={arXiv preprint arXiv:2505.18948},
  year={2025}
}