Approximating Language Model Training Data from Weights

18 June 2025
John X. Morris, Junjie Oscar Yin, Woojeong Kim, Vitaly Shmatikov, Alexander M. Rush
Contact: jxm3@cornell.edu
Main text: 11 pages; bibliography: 4 pages; 5 figures; 5 tables.
Abstract

Modern language models often have open weights but closed training data. We formalize the problem of approximating training data from model weights and propose several baselines and metrics. We develop a gradient-based approach that selects the highest-matching data from a large public text corpus and show its effectiveness at recovering useful data given only the weights of the original and finetuned models. Even when none of the true training data is known, our method is able to locate a small subset of public Web documents that can be used to train a model to close to the original model's performance, for models trained with both classification and supervised-finetuning objectives. On the AG News classification task, our method improves performance from 65% (using randomly selected data) to 80%, approaching the expert benchmark of 88%. When applied to a model trained with SFT on MSMARCO web documents, our method reduces perplexity from 3.3 to 2.3, compared to an expert LLAMA model's perplexity of 2.0.
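The following is a minimal sketch of the idea described in the abstract, not the authors' implementation: score each candidate public document by how well its loss gradient on the base model aligns with the weight difference between the finetuned and base checkpoints, then keep the top-scoring documents. The checkpoint names, the candidate corpus, and the exact scoring rule are assumptions for illustration.

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoints: the method only assumes access to both sets of weights.
base = AutoModelForCausalLM.from_pretrained("org/base-model")
finetuned = AutoModelForCausalLM.from_pretrained("org/finetuned-model")
tokenizer = AutoTokenizer.from_pretrained("org/base-model")

# The direction finetuning moved the weights. Flattening every parameter into
# one vector is done for clarity; at LLM scale this would be chunked or projected.
with torch.no_grad():
    delta = torch.cat([(pf - pb).flatten()
                       for pf, pb in zip(finetuned.parameters(), base.parameters())])

def gradient_score(doc: str) -> float:
    """Cosine similarity between the document's negative loss gradient on the
    base model and the finetuning weight delta; higher means the document
    better explains the observed weight change."""
    base.zero_grad()
    inputs = tokenizer(doc, return_tensors="pt", truncation=True)
    loss = base(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()
    grad = torch.cat([
        p.grad.flatten() if p.grad is not None else torch.zeros_like(p).flatten()
        for p in base.parameters()
    ])
    # Gradient descent moves weights along -grad, so candidates whose -grad
    # points in the same direction as delta best match the finetuning step.
    return F.cosine_similarity(-grad, delta, dim=0).item()

candidate_corpus = ["public web document ...", "another public document ..."]  # placeholder corpus
selected = sorted(candidate_corpus, key=gradient_score, reverse=True)[:1000]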

BibTeX:
@article{morris2025_2506.15553,
  title={Approximating Language Model Training Data from Weights},
  author={John X. Morris and Junjie Oscar Yin and Woojeong Kim and Vitaly Shmatikov and Alexander M. Rush},
  journal={arXiv preprint arXiv:2506.15553},
  year={2025}
}