Is This Collection Worth My LLM's Time? Automatically Measuring Information Potential in Text Corpora

19 February 2025
Tristan Karch
Luca Engel
Philippe Schwaller
Frédéric Kaplan
Abstract

As large language models (LLMs) converge towards similar capabilities, the key to advancing their performance lies in identifying and incorporating valuable new information sources. However, evaluating which text collections are worth the substantial investment required for digitization, preprocessing, and integration into LLM systems remains a significant challenge. We present a novel approach to this challenge: an automated pipeline that evaluates the potential information gain from text collections without requiring model training or fine-tuning. Our method generates multiple choice questions (MCQs) from texts and measures an LLM's performance both with and without access to the source material. The performance gap between these conditions serves as a proxy for the collection's information potential. We validate our approach using five strategically selected datasets: EPFL PhD manuscripts, a private collection of Venetian historical records, two sets of Wikipedia articles on related topics, and a synthetic baseline dataset. Our results demonstrate that this method effectively identifies collections containing valuable novel information, providing a practical tool for prioritizing data acquisition and integration efforts.
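The abstract outlines the core of the pipeline: generate MCQs from the corpus, then compare the LLM's accuracy with and without access to the source passage, taking the gap as a proxy for information potential. Below is a minimal sketch of that comparison step, assuming a generic `ask_llm` chat-completion helper; the helper, prompt format, and demo question are illustrative, not the authors' implementation.

import random
from dataclasses import dataclass

@dataclass
class MCQ:
    question: str
    options: list[str]      # answer choices, e.g. four options
    answer_idx: int         # index of the correct option
    source_passage: str     # excerpt the question was generated from

def ask_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call (swap in your LLM client).
    Here it guesses a random letter so the sketch runs end to end."""
    return random.choice("ABCD")

def answer_mcq(mcq: MCQ, with_context: bool) -> bool:
    """Ask the model one MCQ, optionally prepending the source passage,
    and check whether it picks the correct option letter."""
    letters = "ABCD"
    options = "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(mcq.options))
    context = f"Context:\n{mcq.source_passage}\n\n" if with_context else ""
    prompt = (
        f"{context}Question: {mcq.question}\n{options}\n"
        "Answer with a single letter."
    )
    reply = ask_llm(prompt).strip().upper()
    return reply[:1] == letters[mcq.answer_idx]

def information_potential(mcqs: list[MCQ]) -> float:
    """Accuracy gap between closed-book and open-book settings.
    A large gap suggests the collection contains information
    the model does not already know."""
    closed = sum(answer_mcq(q, with_context=False) for q in mcqs) / len(mcqs)
    open_book = sum(answer_mcq(q, with_context=True) for q in mcqs) / len(mcqs)
    return open_book - closed

if __name__ == "__main__":
    demo = [MCQ(
        question="In which year did the Republic of Venice fall?",
        options=["1797", "1815", "1866", "1918"],
        answer_idx=0,
        source_passage="The Republic of Venice fell to Napoleon in 1797.",
    )]
    print(f"Estimated information potential: {information_potential(demo):.2f}")

With a real model in place of the random stub, a score near zero would indicate the corpus adds little beyond what the model already knows, while a large positive gap would flag it as a candidate for acquisition and integration.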

@article{karch2025_2502.13691,
  title={Is This Collection Worth My LLM's Time? Automatically Measuring Information Potential in Text Corpora},
  author={Tristan Karch and Luca Engel and Philippe Schwaller and Frédéric Kaplan},
  journal={arXiv preprint arXiv:2502.13691},
  year={2025}
}