
Theoretical limitations of multi-layer Transformer

Abstract

Transformers, especially the decoder-only variants, are the backbone of most modern large language models; yet we do not have much understanding of their expressive power except for the simple $1$-layer case. Due to the difficulty of analyzing multi-layer models, all previous work relies on unproven complexity conjectures to show limitations for multi-layer Transformers. In this work, we prove the first \textit{unconditional} lower bound against multi-layer decoder-only transformers. For any constant $L$, we prove that any $L$-layer decoder-only transformer needs a polynomial model dimension ($n^{\Omega(1)}$) to perform sequential composition of $L$ functions over an input of $n$ tokens. As a consequence, our results give: (1) the first depth-width trade-off for multi-layer transformers, exhibiting that the $L$-step composition task is exponentially harder for $L$-layer models compared to $(L+1)$-layer ones; (2) an unconditional separation between encoder and decoder, exhibiting a hard task for decoders that can be solved by an exponentially shallower and smaller encoder; (3) a provable advantage of chain-of-thought, exhibiting a task that becomes exponentially easier with chain-of-thought. On the technical side, we propose the multi-party \textit{autoregressive communication model} that captures the computation of a decoder-only Transformer. We also introduce a new proof technique that finds a certain \textit{indistinguishable decomposition} of all possible inputs iteratively for proving lower bounds in this model. We believe our new communication model and proof technique will be helpful to further understand the computational power of transformers.
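To make the $L$-step sequential composition task concrete, the sketch below gives one possible instantiation: each of the $L$ functions over a small domain is encoded as a lookup table, and the answer is obtained by applying them in order to a start element. The encoding, the helper names (make_instance, compose), and the parameter choices are illustrative assumptions, not the formal task definition from the paper.

```python
# Hypothetical illustration of an L-step sequential function composition task.
# The paper's exact formalization may differ; here each function f_1, ..., f_L
# over the domain {0, ..., k-1} is given as a lookup table, and the target
# output is f_L(...f_2(f_1(x0))...).
import random


def make_instance(L: int, k: int, seed: int = 0):
    """Sample L random functions on {0, ..., k-1} and a start element."""
    rng = random.Random(seed)
    functions = [[rng.randrange(k) for _ in range(k)] for _ in range(L)]
    x0 = rng.randrange(k)
    return functions, x0


def compose(functions, x0: int) -> int:
    """Apply f_1, then f_2, ..., then f_L to the start element."""
    x = x0
    for table in functions:  # L sequential steps; each depends on the previous result
        x = table[x]
    return x


if __name__ == "__main__":
    L, k = 4, 8  # 4 composition steps over a domain of size 8
    functions, x0 = make_instance(L, k)
    print(f"start = {x0}, f_L o ... o f_1(start) = {compose(functions, x0)}")
```

The inherently sequential dependence in the loop (each step needs the previous step's output) is what the lower bound exploits: an $L$-layer decoder-only model cannot shortcut the $L$ dependent steps without a polynomially large model dimension, whereas chain-of-thought can emit the intermediate values one at a time.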
