Reservoir Transformers
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Abstract
We demonstrate that transformers obtain impressive performance even when some of the layers are randomly initialized and never updated. Inspired by old and well-established ideas in machine learning, we explore a variety of non-linear "reservoir" layers interspersed with regular transformer layers, and show improvements in wall-clock compute time until convergence, as well as overall performance, on various machine translation and (masked) language modelling tasks.
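A minimal sketch of the core idea described in the abstract: interspersing frozen, randomly initialized "reservoir" layers with regular trainable transformer layers. The class name, layer sizes, and the specific interleaving pattern below are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch (PyTorch): some encoder layers keep their random initialization and
# are never updated ("reservoir" layers); the rest train as usual.
import torch
import torch.nn as nn


class ReservoirTransformerEncoder(nn.Module):
    def __init__(self, d_model=512, nhead=8, num_layers=6, reservoir_indices=(1, 3)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
            for _ in range(num_layers)
        )
        # Freeze the designated reservoir layers: their randomly initialized
        # weights stay fixed and receive no gradient updates during training.
        for i in reservoir_indices:
            for p in self.layers[i].parameters():
                p.requires_grad_(False)

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x


if __name__ == "__main__":
    model = ReservoirTransformerEncoder()
    x = torch.randn(2, 16, 512)  # (batch, sequence length, d_model)
    out = model(x)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(out.shape, f"trainable params: {trainable}/{total}")
```

Because the frozen layers never require gradients, their parameters can be excluded from the optimizer and from backward bookkeeping, which is one way the reported wall-clock savings could arise.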
