
Scaling Embedding Layers in Language Models

Abstract

We propose SCONE (Scalable, Contextualized, Offloaded, N-gram Embedding), a new method for extending input embedding layers to enhance language model performance. To avoid increased decoding costs, SCONE retains the original vocabulary while introducing embeddings for a set of frequent n-grams. These embeddings provide a contextualized representation for each input token and are learned with a separate model during training. After training, the embeddings are precomputed and stored in off-accelerator memory; during inference, querying them has minimal impact on latency due to the low complexity of embedding lookups. SCONE enables two new scaling strategies: increasing the number of n-gram embeddings and scaling the model used to learn them, both while maintaining fixed accelerator usage during inference (in terms of FLOPS and memory). We show that scaling both aspects enables a model with 1B accelerator-resident parameters to outperform a 1.9B-parameter baseline across diverse corpora, while using only about half the FLOPS and accelerator memory during inference.
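
The lookup mechanism described above can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's implementation: it assumes the frequent n-gram table is a plain in-memory dict (standing in for a large off-accelerator store), that the longest frequent n-gram ending at each position is matched, and that its precomputed embedding is simply added to the token embedding. The names token_table, ngram_table, and contextualized_embeddings are invented for this example.

import numpy as np

EMBED_DIM = 8
MAX_NGRAM = 3
rng = np.random.default_rng(0)

# Base (accelerator-resident) token embedding table: token id -> vector.
token_table = {tok: rng.standard_normal(EMBED_DIM) for tok in range(100)}

# Precomputed n-gram embeddings, stored off-accelerator after training.
# Keys are tuples of token ids; in practice this would be a large
# memory-mapped array with an index, not a Python dict.
ngram_table = {
    (5, 7): rng.standard_normal(EMBED_DIM),
    (3, 5, 7): rng.standard_normal(EMBED_DIM),
}

def contextualized_embeddings(token_ids):
    """Return one embedding per input token: the token embedding plus the
    embedding of the longest frequent n-gram ending at that position."""
    out = []
    for i, tok in enumerate(token_ids):
        vec = token_table[tok].copy()
        # Try the longest n-gram ending at position i first.
        for n in range(min(MAX_NGRAM, i + 1), 1, -1):
            key = tuple(token_ids[i - n + 1 : i + 1])
            if key in ngram_table:
                vec += ngram_table[key]  # cheap lookup; no extra FLOPS on the accelerator
                break
        out.append(vec)
    return np.stack(out)

print(contextualized_embeddings([3, 5, 7, 9]).shape)  # (4, 8)

Because the n-gram embeddings are fixed after training, the table can grow (or the model that produced it can scale up) without changing the per-token compute or memory on the accelerator; only the lookup store grows.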

@article{yu2025_2502.01637,
  title={Scaling Embedding Layers in Language Models},
  author={Da Yu and Edith Cohen and Badih Ghazi and Yangsibo Huang and Pritish Kamath and Ravi Kumar and Daogao Liu and Chiyuan Zhang},
  journal={arXiv preprint arXiv:2502.01637},
  year={2025}
}