New Students on Sesame Street: What Order-Aware Matrix Embeddings Can
Learn from BERT
Main: 8 pages
2 figures
Bibliography: 3 pages
Appendix: 8 pages
Abstract
Large-scale pretrained language models (PreLMs) are revolutionizing natural language processing across all benchmarks. However, their sheer size is prohibitive in low-resource or large-scale applications. While common approaches reduce the size of PreLMs via same-architecture distillation or pruning, we explore distilling PreLMs into more efficient order-aware embedding models. Our results on the GLUE benchmark show that embedding-centric students that learn from BERT achieve scores comparable to DistilBERT on QQP and RTE, often match or exceed the scores of ELMo, and fall behind only on detecting linguistic acceptability.
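
For intuition, knowledge distillation of this kind typically trains the student to match the teacher's temperature-scaled output distribution while also fitting the gold labels. Below is a minimal, generic PyTorch sketch of such a combined objective; it is an illustration of the standard technique only, not the paper's specific training setup, and all names and hyperparameters are placeholders.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Soft-target term: KL divergence between temperature-scaled
        # teacher and student distributions (scaled by T^2 as is customary).
        soft_teacher = F.log_softmax(teacher_logits / temperature, dim=-1)
        soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        kd = F.kl_div(soft_student, soft_teacher,
                      log_target=True, reduction="batchmean") * temperature ** 2
        # Hard-target term: ordinary cross-entropy against the gold labels.
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1 - alpha) * ce

In a sketch like this, the student (e.g., an order-aware embedding model) would be optimized with this loss on the task data, using a frozen BERT teacher to produce teacher_logits.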
