Domain Regeneration: How well do LLMs match syntactic properties of text domains?

12 May 2025
Da Ju
Hagen Blix
Adina Williams
Abstract

Recent improvements in large language model performance have, in all likelihood, been accompanied by improvements in how well they approximate the distribution of their training data. In this work, we explore the following question: which properties of text domains do LLMs faithfully approximate, and how well do they do so? Applying observational approaches familiar from corpus linguistics, we prompt a commonly used, open-source LLM to regenerate text from two domains of permissively licensed English text that are often contained in LLM training data -- Wikipedia and news text. This regeneration paradigm allows us to investigate whether LLMs can faithfully match the original human text domains in a fairly semantically controlled setting. We investigate varying levels of syntactic abstraction, from simpler properties like sentence length and article readability to more complex, higher-order properties such as dependency tag distribution, parse depth, and parse complexity. We find that the majority of the regenerated distributions show a shifted mean, a lower standard deviation, and a reduction of the long tail, as compared to the human originals.
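The abstract names several per-sentence syntactic measurements (sentence length, parse depth) that are summarized as distributions and compared between human originals and LLM regenerations. As a rough illustration of how such statistics could be computed, here is a minimal sketch assuming spaCy for dependency parsing; the tooling, the example texts, and the summary statistics are illustrative assumptions, not the paper's actual pipeline.

# Illustrative sketch only: the paper's abstract does not specify its tooling,
# so spaCy and these particular statistics are assumptions.
import statistics
import spacy

# Small English pipeline with a dependency parser;
# requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def parse_depth(token):
    """Depth of a token in its sentence's dependency tree (root = 0)."""
    depth = 0
    while token.head is not token:  # spaCy root tokens are their own head
        token = token.head
        depth += 1
    return depth

def syntactic_profile(text):
    """Per-sentence token count and maximum dependency parse depth."""
    doc = nlp(text)
    lengths, depths = [], []
    for sent in doc.sents:
        lengths.append(len(sent))
        depths.append(max(parse_depth(tok) for tok in sent))
    return lengths, depths

def summarize(name, values):
    print(f"{name}: mean={statistics.mean(values):.2f} "
          f"stdev={statistics.stdev(values):.2f}")

# Hypothetical inputs: a human-written passage and an LLM regeneration of it.
human_text = (
    "The cathedral, begun in 1163, took nearly two centuries to complete. "
    "Its flying buttresses were added later to counter structural stress."
)
regen_text = (
    "The cathedral was started in 1163 and finished much later. "
    "Buttresses were added to support the walls."
)

for label, text in [("human", human_text), ("regenerated", regen_text)]:
    lengths, depths = syntactic_profile(text)
    summarize(f"{label} sentence length", lengths)
    summarize(f"{label} parse depth", depths)

On a real corpus, the per-sentence values would be pooled into distributions, where the shifted mean, smaller standard deviation, and thinner long tail reported in the abstract would show up in such summary statistics.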

@article{ju2025_2505.07784,
  title={Domain Regeneration: How well do LLMs match syntactic properties of text domains?},
  author={Da Ju and Hagen Blix and Adina Williams},
  journal={arXiv preprint arXiv:2505.07784},
  year={2025}
}