Knee-Deep in C-RASP: A Transformer Depth Hierarchy

Comments: 10 pages main text, 5 figures, 2-page bibliography, 15-page appendix
Abstract

It has been observed that transformers with greater depth (that is, more layers) have more capabilities, but can we establish formally which capabilities are gained with greater depth? We answer this question with a theoretical proof followed by an empirical study. First, we consider transformers that round to fixed precision except inside attention. We show that this subclass of transformers is expressively equivalent to the programming language C-RASP, and that the equivalence preserves depth. Second, we prove that deeper C-RASP programs are strictly more expressive than shallower ones, implying that deeper transformers are more expressive than shallower transformers (within the subclass mentioned above). Both results are established via a form of temporal logic with counting operators, shown equivalent to C-RASP in previous work. Finally, we provide empirical evidence that our theory predicts the depth required for transformers without positional encodings to length-generalize on a family of sequential dependency tasks.

@article{yang2025_2506.16055,
  title={Knee-Deep in C-RASP: A Transformer Depth Hierarchy},
  author={Andy Yang and Michaël Cadilhac and David Chiang},
  journal={arXiv preprint arXiv:2506.16055},
  year={2025}
}