Beyond Public Access in LLM Pre-Training Data

24 April 2025
Sruly Rosenblat
Tim O'Reilly
Ilan Strauss
Abstract

Using a legally obtained dataset of 34 copyrighted O'Reilly Media books, we apply the DE-COP membership inference attack method to investigate whether OpenAI's large language models were trained on copyrighted content without consent. Our AUROC scores show that GPT-4o, OpenAI's more recent and capable model, demonstrates strong recognition of paywalled O'Reilly book content (AUROC = 82%), compared to OpenAI's earlier model GPT-3.5 Turbo. In contrast, GPT-3.5 Turbo shows greater relative recognition of publicly accessible O'Reilly book samples. GPT-4o Mini, as a much smaller model, shows no knowledge of public or non-public O'Reilly Media content when tested (AUROC ≈ 50%). Testing multiple models with the same cutoff date helps us account for potential language shifts over time that might bias our findings. These results highlight the urgent need for increased corporate transparency regarding pre-training data sources as a means to develop formal licensing frameworks for AI content training.
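
The abstract reports results as AUROC scores over membership inference outcomes. The following minimal sketch, which is not the authors' code, illustrates how such a score could be computed with scikit-learn; the per-excerpt recognition scores, the labeling scheme (1 = paywalled excerpt suspected to be in training data, 0 = control excerpt), and all values are assumptions for illustration only.

# Minimal sketch (assumed setup, not the paper's implementation):
# compute AUROC over hypothetical per-excerpt recognition scores.
from sklearn.metrics import roc_auc_score

# 1 = excerpt suspected to be in the training data, 0 = control excerpt
labels = [1, 1, 1, 1, 0, 0, 0, 0]
# Hypothetical model "recognition" scores for each excerpt
scores = [0.91, 0.75, 0.66, 0.81, 0.42, 0.30, 0.55, 0.25]

auroc = roc_auc_score(labels, scores)
print(f"AUROC = {auroc:.2f}")
# ~0.50 indicates chance-level recognition; values well above 0.50 indicate
# the model distinguishes suspected training-set excerpts from controls.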

@article{rosenblat2025_2505.00020,
  title={Beyond Public Access in LLM Pre-Training Data},
  author={Sruly Rosenblat and Tim O'Reilly and Ilan Strauss},
  journal={arXiv preprint arXiv:2505.00020},
  year={2025}
}