Exact Expressive Power of Transformers with Padding

Chain of thought is a natural inference-time method for increasing the computational power of transformer-based large language models (LLMs), but comes at the cost of sequential decoding. Are there more efficient alternatives to expand a transformer's expressive power without adding parameters? We consider transformers with padding tokens as a form of parallelizable test-time compute. We show that averaging-hard-attention, masked-pre-norm transformers with polynomial padding converge to precisely the class $\mathsf{TC}^0$ of extremely parallelizable problems. While the upper bound was known, proving a matching lower bound had been elusive. Further, our novel analysis reveals the precise expanded power of padded transformers when coupled with another form of inference-time compute, namely dynamically increasing depth via looping. Our core technical contribution is to show how padding helps bring the notions of complete problems and reductions, which have been a cornerstone of classical complexity theory, to the formal study of transformers. Armed with this new tool, we prove that padded transformers with $O(\log^d n)$ looping on inputs of length $n$ recognize exactly the class $\mathsf{TC}^d$ of moderately parallelizable problems. Thus, padding and looping together systematically expand transformers' expressive power: with polylogarithmic looping, padded transformers converge to the class $\mathsf{NC}$, the best that could be expected without losing parallelism (unless $\mathsf{NC} = \mathsf{P}$). Our results thus motivate further exploration of padding and looping as parallelizable alternatives to chain of thought.
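
To make the two mechanisms concrete, here is a minimal, hypothetical PyTorch sketch of padding plus looping as parallelizable test-time compute. It is not the paper's averaging-hard-attention, masked-pre-norm construction; the class `LoopedPaddedTransformer` and its `pad_multiple` and `loop_exponent` parameters are illustrative assumptions. The sketch only shows the mechanism: append PAD tokens to grow the sequence polynomially in the input length, then reapply the same fixed block stack roughly $\log^d n$ times, adding compute without adding parameters.

```python
# Hypothetical sketch of padding + looping as parallel test-time compute.
# This is NOT the paper's theoretical construction; it uses a standard
# pre-norm encoder stack purely to illustrate the mechanism.

import math
import torch
import torch.nn as nn

class LoopedPaddedTransformer(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 64,
                 n_layers: int = 2, pad_id: int = 0):
        super().__init__()
        self.pad_id = pad_id
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                           batch_first=True,
                                           norm_first=True)  # pre-norm blocks
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.readout = nn.Linear(d_model, 2)  # binary accept/reject head

    def forward(self, tokens: torch.Tensor, pad_multiple: int = 2,
                loop_exponent: int = 1) -> torch.Tensor:
        n = tokens.size(1)
        # Polynomial padding: extend the sequence to about n**pad_multiple
        # positions with PAD tokens (extra parallel "scratch space").
        num_pad = max(n ** pad_multiple - n, 0)
        pads = torch.full((tokens.size(0), num_pad), self.pad_id,
                          dtype=tokens.dtype, device=tokens.device)
        x = self.embed(torch.cat([tokens, pads], dim=1))
        # Looping: reuse the same fixed blocks ~log(n)**loop_exponent times,
        # increasing effective depth with no additional parameters.
        loops = max(1, math.ceil(math.log2(max(n, 2)) ** loop_exponent))
        for _ in range(loops):
            x = self.blocks(x)
        return self.readout(x[:, 0])  # read the decision from position 0

# Example: a length-8 input padded to 64 positions and looped 3 times.
model = LoopedPaddedTransformer(vocab_size=16)
logits = model(torch.randint(1, 16, (1, 8)))
```

In this toy setup, each forward pass over the padded sequence is a single parallel step, and only the loop count grows (polylogarithmically) with input length, which is the sense in which padding and looping trade sequential chain-of-thought decoding for parallel compute.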
Citation:

@article{merrill2025_2505.18948,
  title   = {Exact Expressive Power of Transformers with Padding},
  author  = {William Merrill and Ashish Sabharwal},
  journal = {arXiv preprint arXiv:2505.18948},
  year    = {2025}
}