Optimizing open-domain question answering with graph-based retrieval augmented generation

4 March 2025
Joyce Cahoon
Prerna Singh
Nick Litombe
Jonathan Larson
Ha Trinh
Yiwen Zhu
Andreas Mueller
Fotis Psallidas
Carlo Curino
Abstract

In this work, we benchmark various graph-based retrieval-augmented generation (RAG) systems across a broad spectrum of query types, including OLTP-style (fact-based) and OLAP-style (thematic) queries, to address the complex demands of open-domain question answering (QA). Traditional RAG methods often fall short in handling nuanced, multi-document synthesis tasks. By structuring knowledge as graphs, we can facilitate the retrieval of context that captures greater semantic depth and enhances language model operations. We explore graph-based RAG methodologies and introduce TREX, a novel, cost-effective alternative that combines graph-based and vector-based retrieval techniques. Our benchmarking across four diverse datasets highlights the strengths of different RAG methodologies, demonstrates TREX's ability to handle multiple open-domain QA types, and reveals the limitations of current evaluation methods. In a real-world technical support case study, we demonstrate how TREX solutions can surpass conventional vector-based RAG in efficiently synthesizing data from heterogeneous sources. Our findings underscore the potential of augmenting large language models with advanced retrieval and orchestration capabilities, advancing scalable, graph-based AI solutions.
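
To make the hybrid retrieval idea concrete, below is a minimal sketch of combining vector-based and graph-based retrieval, in the spirit of what the abstract describes. It is not the paper's TREX implementation: the function names (vector_retrieve, graph_expand, hybrid_retrieve), the toy hash-based embedding, and the one-hop expansion over a networkx graph are all illustrative assumptions.

import numpy as np
import networkx as nx

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy deterministic "embedding" seeded from a hash; stands in for a real
    # embedding model and is NOT semantically meaningful.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def vector_retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank text chunks by cosine similarity to the query embedding.
    q = embed(query)
    return sorted(chunks, key=lambda c: -float(q @ embed(c)))[:k]

def graph_expand(graph: nx.Graph, seeds: set[str], hops: int = 1) -> set[str]:
    # Collect entities within `hops` of the seed entities in a knowledge graph.
    found, frontier = set(seeds), set(seeds)
    for _ in range(hops):
        frontier = {n for s in frontier if s in graph
                    for n in graph.neighbors(s)} - found
        found |= frontier
    return found

def hybrid_retrieve(query, chunks, graph, entity_index, k=2, hops=1):
    # Vector search selects seed chunks; the graph is then expanded around the
    # entities those chunks mention to add related context.
    seed_chunks = vector_retrieve(query, chunks, k=k)
    seed_entities = {e for c in seed_chunks for e in entity_index.get(c, [])}
    return {"chunks": seed_chunks,
            "entities": sorted(graph_expand(graph, seed_entities, hops=hops))}

if __name__ == "__main__":
    # Hypothetical technical-support style data, echoing the case study setting.
    chunks = [
        "Error 0x80070005 occurs when the service lacks registry permissions.",
        "The billing report aggregates usage across all subscriptions.",
    ]
    g = nx.Graph()
    g.add_edges_from([("Error 0x80070005", "registry permissions"),
                      ("registry permissions", "local service account")])
    entity_index = {chunks[0]: ["Error 0x80070005"], chunks[1]: []}
    print(hybrid_retrieve("Why do I get error 0x80070005?", chunks, g, entity_index))

In this sketch the vector step supplies the fact-bearing passages (OLTP-style lookups), while the graph expansion pulls in connected entities that a purely vector-based retriever would miss, which is the kind of multi-document, thematic context the abstract attributes to graph-based RAG.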

@article{cahoon2025_2503.02922,
  title={Optimizing open-domain question answering with graph-based retrieval augmented generation},
  author={Joyce Cahoon and Prerna Singh and Nick Litombe and Jonathan Larson and Ha Trinh and Yiwen Zhu and Andreas Mueller and Fotis Psallidas and Carlo Curino},
  journal={arXiv preprint arXiv:2503.02922},
  year={2025}
}