Modern language models often have open weights but closed training data. We formalize the problem of data approximation from model weights and propose several baselines and metrics. We develop a gradient-based approach that selects the best-matching data from a large public text corpus and show its effectiveness at recovering useful data given only the weights of the original and finetuned models. Even when none of the true training data is known, our method locates a small subset of public Web documents that can be used to train a model to performance close to that of the original model, for models trained with both classification and supervised-finetuning objectives. On the AG News classification task, our method improves performance from 65% (using randomly selected data) to 80%, approaching the expert benchmark of 88%. When applied to a model trained with SFT on MSMARCO web documents, our method reduces perplexity from 3.3 to 2.3, compared to an expert LLAMA model's perplexity of 2.0.
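The abstract describes the gradient-based selection only at a high level. The sketch below is one plausible reading, not the paper's actual algorithm: each candidate public document is scored by how well a gradient step on that document (taken from the base model, here with a causal-LM loss) aligns with the observed weight difference between the finetuned and base models; the function name score_documents and all variable names are assumptions made for this illustration.

# Illustrative sketch (assumed, not the paper's implementation): rank public
# documents by alignment between their gradient on the base model and the
# weight delta (finetuned - base). Higher-scoring documents are candidates
# for approximating the hidden training set.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def score_documents(base_name, finetuned_name, documents, device="cpu"):
    tok = AutoTokenizer.from_pretrained(base_name)
    base = AutoModelForCausalLM.from_pretrained(base_name).to(device)
    tuned = AutoModelForCausalLM.from_pretrained(finetuned_name).to(device)

    # Per-parameter weight delta; assumes both checkpoints share the same
    # architecture so named_parameters() line up in the same order.
    delta = {name: (p_tuned - p_base).detach()
             for (name, p_base), (_, p_tuned) in zip(base.named_parameters(),
                                                     tuned.named_parameters())}

    scores = []
    for text in documents:
        base.zero_grad()
        batch = tok(text, return_tensors="pt", truncation=True).to(device)
        loss = base(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        # A gradient-descent step moves weights along -grad, so a document
        # whose -grad points toward the finetuned weights gets a high score.
        score = sum(torch.sum(-p.grad * delta[name]).item()
                    for name, p in base.named_parameters()
                    if p.grad is not None)
        scores.append(score)
    return scores

In this reading, the top-scoring documents would then be kept as the approximate training set and used to retrain or finetune the base model; the exact loss, scoring rule, and selection budget are design choices the abstract does not specify.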
@article{morris2025_2506.15553,
  title   = {Approximating Language Model Training Data from Weights},
  author  = {John X. Morris and Junjie Oscar Yin and Woojeong Kim and Vitaly Shmatikov and Alexander M. Rush},
  journal = {arXiv preprint arXiv:2506.15553},
  year    = {2025}
}