Transformers, parallel computation, and logarithmic depth
Main: 10 pages · 20 figures · 3 tables · Bibliography: 6 pages · Appendix: 42 pages
Abstract
We show that a constant number of self-attention layers can efficiently simulate, and be simulated by, a constant number of communication rounds of Massively Parallel Computation. As a consequence, we show that logarithmic depth is sufficient for transformers to solve basic computational tasks that cannot be efficiently solved by several other neural sequence models and sub-quadratic transformer approximations. We thus establish parallelism as a key distinguishing property of transformers.
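The abstract's central claim is that a constant number of self-attention layers corresponds to a constant number of all-to-all communication rounds. As a purely illustrative sketch (not a construction from the paper), the snippet below shows a single-head self-attention layer in which every token's output is computed in one parallel step by aggregating over all other tokens, loosely analogous to one MPC communication round; all dimensions and weight matrices are arbitrary placeholders.

```python
# Illustrative sketch only: one self-attention layer, written to highlight that
# every token's output is computed in parallel from all tokens in a single
# "round" of all-pairs interaction. Not taken from the paper's constructions.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """One single-head self-attention layer over a sequence X of shape (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                  # per-token projections (parallel over tokens)
    scores = Q @ K.T / np.sqrt(K.shape[1])            # all-pairs interaction in one step
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ V                                # each token aggregates "messages" from all tokens

# Tiny usage example with random inputs (shapes chosen arbitrarily).
rng = np.random.default_rng(0)
n, d = 8, 16
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (8, 16)
```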
