How Significant Are the Real Performance Gains? An Unbiased Evaluation Framework for GraphRAG

By retrieving context from knowledge graphs, graph-based retrieval-augmented generation (GraphRAG) enhances large language models (LLMs) so that they generate higher-quality answers to user questions. Many GraphRAG methods have been proposed and have reported impressive gains in answer quality. However, we observe that the current answer evaluation framework for GraphRAG has two critical flaws: unrelated questions and evaluation biases, both of which may lead to biased or even wrong conclusions about performance. To address these flaws, we propose an unbiased evaluation framework that uses graph-text-grounded question generation to produce questions more closely related to the underlying dataset, together with an unbiased evaluation procedure that eliminates the biases of LLM-based answer assessment. Applying this framework to evaluate 3 representative GraphRAG methods, we find that their performance gains are much more moderate than previously reported. Although our evaluation framework may still have flaws, it calls for scientific evaluation to lay solid foundations for GraphRAG research.
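To make the first component concrete, the sketch below shows one plausible shape for graph-text-grounded question generation: sample a short relational path from the knowledge graph, collect the source passages the path's entities were extracted from, and ask an LLM to write a question answerable only from that grounded evidence. The abstract does not specify the paper's actual pipeline; the sampling strategy, the prompt, and the `complete` completion function are all illustrative assumptions, not the authors' method.

```python
# Illustrative sketch of graph-text-grounded question generation.
# ASSUMPTION: the random-walk sampling, the prompt wording, and the
# `complete` LLM-completion callable are hypothetical stand-ins; the
# paper's actual pipeline may differ.

import random
from typing import Callable

def generate_grounded_question(
    graph: dict[str, list[tuple[str, str]]],  # entity -> [(relation, entity)]
    entity_to_passage: dict[str, str],        # entity -> source text passage
    complete: Callable[[str], str],           # hypothetical LLM call
    hops: int = 2,
) -> str:
    """Sample a short relational path from the knowledge graph, gather
    the source passages mentioning its entities, and ask an LLM for a
    question grounded in both the graph and the underlying text."""
    # Random walk over the graph to collect up to `hops` triples.
    node = random.choice(list(graph))
    triples = []
    for _ in range(hops):
        if not graph.get(node):
            break
        relation, tail = random.choice(graph[node])
        triples.append((node, relation, tail))
        node = tail

    # Pair the graph path with the text it came from, so the generated
    # question is tied to the dataset rather than to generic knowledge.
    evidence = "\n".join(
        entity_to_passage.get(e, "") for h, _, t in triples for e in (h, t)
    )
    path = "; ".join(f"{h} --{r}--> {t}" for h, r, t in triples)

    prompt = (
        "Write one question that can be answered only by combining the "
        f"facts below.\nGraph path: {path}\nSource text:\n{evidence}\n"
        "Question:"
    )
    return complete(prompt)
```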
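For the second component, a well-documented bias in LLM-based answer assessment is position bias: pairwise judges tend to favor whichever answer is presented first. The sketch below shows one standard mitigation, judging each pair in both presentation orders and accepting a verdict only when it survives the swap. This is a minimal sketch of that general technique, not necessarily the paper's procedure; `call_llm_judge` is a hypothetical function returning "A", "B", or "tie".

```python
# Minimal sketch of a position-debiased pairwise LLM judge.
# ASSUMPTION: this order-swapping scheme is a common debiasing technique,
# not confirmed to be the paper's exact evaluation procedure.

from typing import Callable, Literal

Verdict = Literal["A", "B", "tie"]

def debiased_pairwise_judge(
    question: str,
    answer_x: str,
    answer_y: str,
    call_llm_judge: Callable[[str, str, str], Verdict],  # hypothetical
) -> Verdict:
    """Judge answer_x vs. answer_y in both presentation orders.

    Returns "A" if answer_x wins in both orders, "B" if answer_y wins
    in both orders, and "tie" whenever the two orderings disagree."""
    # First pass: answer_x shown as candidate A, answer_y as candidate B.
    forward = call_llm_judge(question, answer_x, answer_y)
    # Second pass: swap the presentation order.
    backward = call_llm_judge(question, answer_y, answer_x)

    # Map the swapped verdict back to the original labels.
    backward_mapped: Verdict = {"A": "B", "B": "A", "tie": "tie"}[backward]

    # Accept only a win that is consistent across orders; else call a tie.
    if forward == backward_mapped and forward != "tie":
        return forward
    return "tie"
```

Counting order-inconsistent verdicts as ties trades a little statistical power for robustness: any win credited to a method then reflects answer quality rather than presentation order.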
@article{zeng2025_2506.06331,
  title   = {How Significant Are the Real Performance Gains? An Unbiased Evaluation Framework for GraphRAG},
  author  = {Qiming Zeng and Xiao Yan and Hao Luo and Yuhao Lin and Yuxiang Wang and Fangcheng Fu and Bo Du and Quanqing Xu and Jiawei Jiang},
  journal = {arXiv preprint arXiv:2506.06331},
  year    = {2025}
}