Industry-scale recommender systems face a core challenge: representing entities with high cardinality, such as users or items, using dense embeddings that must be accessible during both training and inference. However, as embedding sizes grow, memory constraints make storage and access increasingly difficult. We describe a lightweight, learnable embedding compression technique that projects dense embeddings into a high-dimensional, sparsely activated space. Designed for retrieval tasks, our method reduces memory requirements while preserving retrieval performance, enabling scalable deployment under strict resource constraints. Our results demonstrate that leveraging sparsity is a promising approach for improving the efficiency of large-scale recommenders. We release our code at this https URL.
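To make the core idea concrete, below is a minimal PyTorch sketch of one way a learnable dense-to-sparse projection could look: a linear map into a much wider space followed by top-k sparsification. The dimensions (128 to 16384), the ReLU, and the top-k rule are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SparseProjection(nn.Module):
    """Sketch of a learnable projection from a dense embedding into a
    high-dimensional, sparsely activated space. Sizes and the top-k
    sparsification rule are assumptions for illustration only."""

    def __init__(self, dense_dim: int = 128, sparse_dim: int = 16384, k: int = 32):
        super().__init__()
        self.proj = nn.Linear(dense_dim, sparse_dim)  # learnable expansion
        self.k = k                                    # number of active units kept

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = torch.relu(self.proj(x))                  # non-negative activations
        topk = torch.topk(z, self.k, dim=-1)          # keep the k largest per row
        sparse = torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)
        return sparse                                 # mostly zeros; only nonzeros need storage/indexing

# Usage: compress a batch of dense item embeddings.
dense = torch.randn(4, 128)
codes = SparseProjection()(dense)
print(codes.shape, (codes != 0).sum(dim=-1))  # torch.Size([4, 16384]); 32 nonzeros per row
```

Because only k of the 16384 coordinates are nonzero, each compressed vector can be stored as k (index, value) pairs, which is where the memory savings over the original dense embedding would come from under this sketch.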
@article{kasalický2025_2505.11388,
  title={The Future is Sparse: Embedding Compression for Scalable Retrieval in Recommender Systems},
  author={Petr Kasalický and Martin Spišák and Vojtěch Vančura and Daniel Bohuněk and Rodrigo Alves and Pavel Kordík},
  journal={arXiv preprint arXiv:2505.11388},
  year={2025}
}