Exploring Training and Inference Scaling Laws in Generative Retrieval

Generative retrieval reformulates retrieval as an autoregressive generation task in which large language models (LLMs) generate target documents directly from a query. Because this paradigm is still new, the mechanisms that underpin its performance and scalability remain largely unexplored. We systematically investigate training and inference scaling laws in generative retrieval, examining how model size, training data scale, and inference-time compute jointly influence performance. We propose a novel evaluation metric inspired by contrastive entropy and generation loss, which provides a continuous performance signal and enables robust comparisons across diverse generative retrieval methods. Our experiments show that n-gram-based methods align strongly with both training and inference scaling laws. Increasing model size, training data scale, and inference-time compute each improves performance, highlighting the complementary roles of these factors in enhancing generative retrieval. Across these settings, LLaMA models consistently outperform T5 models, suggesting a particular advantage for larger decoder-only models in generative retrieval. Our findings underscore that model size, data availability, and inference-time computation interact to unlock the full potential of generative retrieval, offering new insights for designing and optimizing future systems.
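
The abstract refers to an evaluation metric inspired by contrastive entropy and generation loss that yields a continuous performance signal, and to fitting scaling trends across model sizes. The exact formulations are not given here, so the sketch below illustrates one plausible reading: a per-query score that contrasts the gold document's sequence-level generation loss against losses of sampled negative documents, and a saturating power-law fit over synthetic (model size, score) pairs. The function names, the negative-sampling scheme, the power-law form, and all numbers are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np
from scipy.optimize import curve_fit


def contrastive_entropy_score(gold_loss: float, negative_losses: np.ndarray) -> float:
    """Continuous relevance signal for one query (illustrative, not the paper's exact metric).

    gold_loss       : sequence-level generation loss (negative log-likelihood)
                      of the relevant document's identifier given the query.
    negative_losses : generation losses of sampled non-relevant documents.

    A softmax over the negated losses gives the probability mass the model
    places on the gold document relative to the negatives; higher is better.
    """
    losses = np.concatenate(([gold_loss], negative_losses))
    logits = -losses                       # lower loss -> higher score
    logits -= logits.max()                 # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(probs[0])


def saturating_power_law(n, a, b, c):
    """Hypothetical scaling-law form: score(n) = c - a * n**(-b), n in billions of parameters."""
    return c - a * n ** (-b)


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Average the per-query score over a synthetic evaluation set.
    scores = [
        contrastive_entropy_score(gold_loss=rng.uniform(0.5, 1.5),
                                  negative_losses=rng.uniform(2.0, 4.0, size=99))
        for _ in range(1000)
    ]
    print(f"mean contrastive-entropy-style score: {np.mean(scores):.4f}")

    # Fit the assumed power law to synthetic (model size, score) pairs to show
    # how a continuous metric supports scaling-law analysis.
    model_sizes = np.array([0.06, 0.22, 0.77, 3.0, 7.0])   # billions of parameters
    metric = saturating_power_law(model_sizes, a=0.05, b=0.35, c=0.9) \
             + rng.normal(0, 0.005, size=model_sizes.size)
    params, _ = curve_fit(saturating_power_law, model_sizes, metric,
                          p0=[0.1, 0.5, 1.0], maxfev=10000)
    print("fitted (a, b, c):", np.round(params, 3))
```

Because the score is a smooth function of generation losses rather than a discrete rank cutoff, it varies continuously with model quality, which is what makes curve fitting of this kind meaningful; the specific choices above are only a sketch.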
@article{cai2025_2503.18941,
  title={Exploring Training and Inference Scaling Laws in Generative Retrieval},
  author={Hongru Cai and Yongqi Li and Ruifeng Yuan and Wenjie Wang and Zhen Zhang and Wenjie Li and Tat-Seng Chua},
  journal={arXiv preprint arXiv:2503.18941},
  year={2025}
}