2D Matryoshka training enables a single embedding model to generate sub-network representations across different layers and embedding dimensions, offering adaptability to diverse computational and task constraints. However, its effectiveness remains well below that of individually trained models of equivalent sizes. To address this, we propose Starbucks, a new training strategy for Matryoshka-style embedding models that combines structured fine-tuning with masked autoencoder (MAE) pre-training. During fine-tuning, we compute the loss over a fixed set of layer-dimension pairs, from small to large, which significantly improves performance over randomly sampled sub-networks and matches that of separately trained models. Our MAE-based pre-training further enhances the representation quality of sub-networks, providing a stronger backbone for downstream tasks. Experiments on both in-domain (semantic similarity and passage retrieval) and out-of-domain (BEIR) benchmarks show that Starbucks consistently outperforms 2D Matryoshka models and matches or exceeds the performance of individually trained models, while maintaining high efficiency and adaptability. Ablation studies confirm our loss design choices, show the impact of the MAE-based (SMAE) pre-training, and demonstrate the applicability of Starbucks across backbones. We further show that depth- and width-wise Starbucks variants capture complementary information, and that their hybridization yields additional performance gains with minimal latency overhead due to parallelization. Code available at this https URL
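To make the fine-tuning objective concrete, below is a minimal sketch of computing a loss over a fixed set of layer-dimension pairs, ordered from small to large. It assumes a PyTorch encoder that exposes per-layer embeddings; the specific pairs, the `contrastive_loss` helper, and the averaging scheme are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

# Fixed set of (layer, embedding-dimension) pairs, ordered from small to large.
# These values are assumptions for illustration.
LAYER_DIM_PAIRS = [(2, 32), (4, 64), (6, 128), (8, 256), (10, 512), (12, 768)]

def contrastive_loss(q, d):
    """In-batch negative cross-entropy over query/document embeddings."""
    q = F.normalize(q, dim=-1)
    d = F.normalize(d, dim=-1)
    scores = q @ d.T                                   # (batch, batch) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)  # positives on the diagonal
    return F.cross_entropy(scores, labels)

def starbucks_style_loss(query_hidden_states, doc_hidden_states):
    """Aggregate the loss over every fixed (layer, dim) sub-network.

    `*_hidden_states` are sequences of per-layer [CLS] embeddings of shape
    (batch, full_dim), e.g. obtained with output_hidden_states=True.
    """
    total = 0.0
    for layer, dim in LAYER_DIM_PAIRS:
        q = query_hidden_states[layer][:, :dim]  # truncate to the first `dim` dimensions
        d = doc_hidden_states[layer][:, :dim]
        total = total + contrastive_loss(q, d)
    return total / len(LAYER_DIM_PAIRS)
```

Because the pairs are fixed, every training step supervises the same small-to-large sub-networks, rather than a randomly sampled layer-dimension combination as in 2D Matryoshka training.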
@article{zhuang2025_2410.13230,
  title   = {Starbucks-v2: Improved Training for 2D Matryoshka Embeddings},
  author  = {Shengyao Zhuang and Shuai Wang and Fabio Zheng and Bevan Koopman and Guido Zuccon},
  journal = {arXiv preprint arXiv:2410.13230},
  year    = {2025}
}