
Better RAG using Relevant Information Gain

16 July 2024
Marc Pickett
Jeremy Hartman
Ayan Kumar Bhowmick
Raquib-ul Alam
Aditya Vempaty
Abstract

A common way to extend the memory of large language models (LLMs) is by retrieval augmented generation (RAG), which inserts text retrieved from a larger memory into an LLM's context window. However, the context window is typically limited to several thousand tokens, which limits the number of retrieved passages that can inform a model's response. For this reason, it's important to avoid occupying context window space with redundant information by ensuring a degree of diversity among retrieved passages. At the same time, the information should also be relevant to the current task. Most prior methods that encourage diversity among retrieved results, such as Maximal Marginal Relevance (MMR), do so by incorporating an objective that explicitly trades off diversity and relevance. We propose a novel simple optimization metric based on relevant information gain, a probabilistic measure of the total information relevant to a query for a set of retrieved results. By optimizing this metric, diversity organically emerges from our system. When used as a drop-in replacement for the retrieval component of a RAG system, this method yields state-of-the-art performance on question answering tasks from the Retrieval Augmented Generation Benchmark (RGB), outperforming existing metrics that directly optimize for relevance and diversity.
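The abstract above can be illustrated with a minimal sketch of set selection by relevant information gain. This is not the paper's implementation; it assumes a hypothetical model in which each passage is assigned a probability of covering each of several latent query aspects, and defines the gain of a set as the expected number of aspects covered by at least one selected passage. Because that objective saturates per aspect, a greedy selector naturally skips passages redundant with ones already chosen, so diversity emerges without an explicit diversity term:

```python
import numpy as np

def relevant_information_gain(P):
    """Gain of a passage set, given P: an (n_passages, n_aspects) matrix
    where P[i, a] is the (assumed) probability that passage i covers
    latent query aspect a. The gain is the expected number of aspects
    covered by at least one passage: sum_a [1 - prod_i (1 - P[i, a])]."""
    return float(np.sum(1.0 - np.prod(1.0 - P, axis=0)))

def greedy_select(P, k):
    """Greedily pick k passages, each maximizing the marginal gain."""
    selected, remaining = [], list(range(P.shape[0]))
    for _ in range(k):
        best = max(remaining,
                   key=lambda i: relevant_information_gain(P[selected + [i]]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy example: passages 0 and 1 both cover aspect 0; passage 2 covers aspect 1.
P = np.array([[0.90, 0.00],
              [0.85, 0.00],
              [0.00, 0.80]])
picked = greedy_select(P, k=2)
```

With this toy matrix, the greedy pass first takes passage 0 (highest single-passage gain), then prefers passage 2 over the near-duplicate passage 1, since covering a new aspect adds more gain than re-covering one already likely covered. How the aspect probabilities would actually be estimated (e.g., from a retriever's scores) is left open here.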

@article{pickett2025_2407.12101,
  title={Better RAG using Relevant Information Gain},
  author={Marc Pickett and Jeremy Hartman and Ayan Kumar Bhowmick and Raquib-ul Alam and Aditya Vempaty},
  journal={arXiv preprint arXiv:2407.12101},
  year={2025}
}