Graph-based RAG Enhancement via Global Query Disambiguation and Dependency-Aware Reranking

7 June 2025
Ningyuan Li
Junrui Liu
Yi Shan
Minghui Huang
Tong Li
arXiv (abs) | PDF | HTML
Main: 9 pages · 5 figures · 4 tables · Bibliography: 2 pages · Appendix: 2 pages
Abstract

Contemporary graph-based retrieval-augmented generation (RAG) methods typically begin by extracting entities from user queries and then leverage pre-constructed knowledge graphs to retrieve related relationships and metadata. However, this pipeline's exclusive reliance on entity-level extraction can lead to the misinterpretation or omission of latent yet critical information and relations. As a result, retrieved content may be irrelevant or contradictory, and essential knowledge may be excluded, exacerbating hallucination risks and degrading the fidelity of generated responses. To address these limitations, we introduce PankRAG, a framework that combines a globally aware, hierarchical query-resolution strategy with a novel dependency-aware reranking mechanism. PankRAG first constructs a multi-level resolution path that captures both parallel and sequential interdependencies within a query, guiding large language models (LLMs) through structured reasoning. It then applies its dependency-aware reranker to exploit the dependency structure among resolved sub-questions, enriching and validating retrieval results for subsequent sub-questions. Empirical evaluations demonstrate that PankRAG consistently outperforms state-of-the-art approaches across multiple benchmarks, underscoring its robustness and generalizability.
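The abstract describes the pipeline only at a high level. The sketch below is a minimal, hypothetical illustration of the general idea: decompose a query into sub-questions arranged in a dependency DAG (parallel branches are independent nodes, sequential steps are edges), resolve them in topological order, and rerank retrieved passages for a dependent sub-question using the answers of its prerequisites. The `retrieve` and `answer_with_llm` stubs, the token-overlap scoring, and the example `plan` are illustrative assumptions, not PankRAG's actual implementation.

```python
# Illustrative sketch (not the authors' code) of dependency-aware resolution:
# sub-questions form a DAG, are answered in topological order, and retrieval
# results for later sub-questions are reranked using earlier answers.
from graphlib import TopologicalSorter


def retrieve(question: str, top_k: int = 5) -> list[str]:
    """Hypothetical stand-in for a graph-based retriever."""
    return [f"passage about '{question}' #{i}" for i in range(top_k)]


def answer_with_llm(question: str, passages: list[str]) -> str:
    """Hypothetical stand-in for an LLM call that answers from passages."""
    return f"answer({question})"


def rerank_with_dependencies(passages: list[str], resolved: dict[str, str],
                             deps: list[str]) -> list[str]:
    """Toy dependency-aware reranker: prefer passages that overlap with the
    answers of already-resolved prerequisite sub-questions."""
    context_tokens = {
        tok.lower()
        for dep in deps
        for tok in resolved.get(dep, "").split()
    }

    def score(passage: str) -> int:
        return sum(tok in passage.lower() for tok in context_tokens)

    return sorted(passages, key=score, reverse=True)


def resolve_query(plan: dict[str, list[str]]) -> dict[str, str]:
    """Resolve sub-questions in dependency order; `plan` maps each
    sub-question to the sub-questions it depends on."""
    resolved: dict[str, str] = {}
    for node in TopologicalSorter(plan).static_order():
        deps = plan.get(node, [])
        passages = rerank_with_dependencies(retrieve(node), resolved, deps)
        resolved[node] = answer_with_llm(node, passages)
    return resolved


if __name__ == "__main__":
    # Example decomposition: two parallel sub-questions feed a final one.
    plan = {
        "Who founded company X?": [],
        "Where is company X headquartered?": [],
        "Did the founder work in the headquarters city?": [
            "Who founded company X?",
            "Where is company X headquartered?",
        ],
    }
    print(resolve_query(plan))
```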

View on arXiv: https://arxiv.org/abs/2506.11106
@article{li2025_2506.11106,
  title={Graph-based RAG Enhancement via Global Query Disambiguation and Dependency-Aware Reranking},
  author={Ningyuan Li and Junrui Liu and Yi Shan and Minghui Huang and Tong Li},
  journal={arXiv preprint arXiv:2506.11106},
  year={2025}
}