
Teaching Pretrained Language Models to Think Deeper with Retrofitted Recurrence

Main: 13 pages
Appendix: 19 pages
Bibliography: 8 pages
54 figures
9 tables
Abstract

Recent advances in depth-recurrent language models show that recurrence can decouple train-time compute and parameter count from test-time compute. In this work, we study how to convert existing pretrained non-recurrent language models into depth-recurrent models. We find that using a curriculum of recurrences to increase the effective depth of the model over the course of training preserves performance while reducing total computational cost. In our experiments on mathematics, we observe that converting pretrained models to recurrent ones yields better performance at a given compute budget than simply post-training the original non-recurrent language model.
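To make the core idea concrete, below is a minimal sketch (not the authors' code) of how a depth-recurrent model can reuse a shared block of layers a variable number of times, with a curriculum that increases the recurrence count over training. All names here (RecurrentBlock, recurrence_schedule, the linear stand-in layers) are illustrative assumptions, not the paper's implementation.

```python
# Sketch only: a depth-recurrent wrapper that reuses a shared block of layers
# `r` times, where `r` follows a training curriculum. Assumes a PyTorch-style
# model; the layer stack here is a stand-in for pretrained transformer layers.
import torch
import torch.nn as nn


class RecurrentBlock(nn.Module):
    """Applies a shared stack of layers r times (effective depth = r * len(layers))."""

    def __init__(self, layers: nn.ModuleList):
        super().__init__()
        self.layers = layers  # e.g. middle layers taken from a pretrained model

    def forward(self, hidden: torch.Tensor, r: int) -> torch.Tensor:
        for _ in range(r):            # recurrence: reuse the same weights r times
            for layer in self.layers:
                hidden = layer(hidden)
        return hidden


def recurrence_schedule(step: int, total_steps: int, r_min: int = 1, r_max: int = 8) -> int:
    """Curriculum (illustrative): linearly increase the recurrence count over training."""
    frac = min(step / max(total_steps, 1), 1.0)
    return r_min + round(frac * (r_max - r_min))


if __name__ == "__main__":
    # Early in training the model runs shallow (cheap); later it runs deep.
    layers = nn.ModuleList([nn.Linear(16, 16) for _ in range(2)])
    block = RecurrentBlock(layers)
    x = torch.randn(4, 16)
    for step in (0, 5000, 10000):
        r = recurrence_schedule(step, total_steps=10000)
        y = block(x, r)
        print(f"step={step}: recurrences={r}, output shape={tuple(y.shape)}")
```

The curriculum shape (linear here) and the recurrence range are assumptions for illustration; the paper studies how such a schedule affects performance and total training cost.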
