Can Pre-training Indicators Reliably Predict Fine-tuning Outcomes of LLMs?

While metrics available during pre-training, such as perplexity, correlate well with model performance in scaling-law studies, their predictive capacity at a fixed model size remains unclear, hindering effective model selection and development. To address this gap, we formulate the task of selecting pre-training checkpoints to maximize downstream fine-tuning performance as a pairwise classification problem: predicting which of two LLMs, differing in their pre-training, will perform better after supervised fine-tuning (SFT). We construct a dataset of 50 1B-parameter LLM variants with systematically varied pre-training configurations, e.g., objectives or data, and evaluate them on diverse downstream tasks after SFT. We first demonstrate that conventional perplexity is a misleading indicator. We therefore introduce novel unsupervised and supervised proxy metrics derived from pre-training that reduce the relative performance prediction error rate by over 50%. Despite the inherent complexity of this task, we demonstrate the practical utility of our proposed proxies in specific scenarios, paving the way for more efficient design of pre-training schemes optimized for various downstream tasks.
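To make the pairwise formulation concrete, here is a minimal sketch (not the paper's code) of how a candidate proxy metric could be scored by how often it correctly predicts which of two pre-trained checkpoints performs better after SFT. The checkpoint identifiers, score values, and the `pairwise_error_rate` helper are all hypothetical; the paper's actual proxies and evaluation protocol may differ.

```python
# Sketch: pairwise error rate of a pre-training proxy metric against post-SFT results.
# All names and numbers below are illustrative assumptions, not values from the paper.
from itertools import combinations

def pairwise_error_rate(proxy_scores, sft_scores):
    """Fraction of checkpoint pairs where the proxy ranks the worse model higher.

    proxy_scores: dict mapping checkpoint id -> proxy value measured at pre-training
                  (higher assumed better; negate perplexity-like metrics first).
    sft_scores:   dict mapping checkpoint id -> downstream score after SFT.
    """
    errors, total = 0, 0
    for a, b in combinations(sorted(proxy_scores), 2):
        if sft_scores[a] == sft_scores[b]:
            continue  # skip ties in the ground-truth ranking
        total += 1
        proxy_prefers_a = proxy_scores[a] > proxy_scores[b]
        truth_prefers_a = sft_scores[a] > sft_scores[b]
        if proxy_prefers_a != truth_prefers_a:
            errors += 1
    return errors / total if total else 0.0

# Hypothetical usage with three pre-training variants:
proxy = {"ckpt_A": -2.31, "ckpt_B": -2.28, "ckpt_C": -2.45}   # e.g. negative perplexity
sft   = {"ckpt_A": 0.61,  "ckpt_B": 0.58,  "ckpt_C": 0.66}    # post-SFT task score
print(f"pairwise error rate: {pairwise_error_rate(proxy, sft):.2f}")
```

Under this framing, a "50% reduction in relative error rate" means the proposed proxies roughly halve the fraction of checkpoint pairs that a baseline indicator such as perplexity misorders.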
@article{zeng2025_2504.12491,
  title   = {Can Pre-training Indicators Reliably Predict Fine-tuning Outcomes of LLMs?},
  author  = {Hansi Zeng and Kai Hui and Honglei Zhuang and Zhen Qin and Zhenrui Yue and Hamed Zamani and Dana Alon},
  journal = {arXiv preprint arXiv:2504.12491},
  year    = {2025}
}