
Improving Factuality for Dialogue Response Generation via Graph-Based Knowledge Augmentation

Main: 8 pages; Bibliography: 2 pages; Appendix: 5 pages; 2 figures, 12 tables
Abstract

Large Language Models (LLMs) succeed in many natural language processing tasks. However, their tendency to hallucinate, that is, to generate plausible but inconsistent or factually incorrect text, can cause problems in certain tasks, including response generation in dialogue. To mitigate this issue, knowledge-augmented methods have shown promise in reducing hallucinations. Here, we introduce a novel framework designed to enhance the factuality of dialogue response generation, as well as an approach to evaluate dialogue factual accuracy. Our framework combines a knowledge triple retriever, a dialogue rewriter, and knowledge-enhanced response generation to produce more accurate and grounded dialogue responses. To further evaluate generated responses, we propose a revised fact score that addresses the limitations of existing fact-score methods in dialogue settings, providing a more reliable assessment of factual consistency. We evaluate our methods against different baselines on the OpenDialKG and HybriDialogue datasets. Our methods significantly improve factuality compared to other graph knowledge-augmentation baselines, including the state-of-the-art G-Retriever. The code will be released on GitHub.
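The abstract describes a three-stage pipeline: retrieve knowledge triples, rewrite the dialogue, then generate a knowledge-grounded response. Below is a minimal sketch of how such a pipeline could be wired together; the function names, the naive token-overlap retrieval scoring, and the `llm` callable are all illustrative assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    """A knowledge-graph fact as a (head, relation, tail) triple."""
    head: str
    relation: str
    tail: str

def retrieve_triples(history: list[str], kg: list[Triple], k: int = 5) -> list[Triple]:
    """Stage 1 (sketch): score each triple against the dialogue history
    by token overlap and keep the top-k. A real retriever would likely
    use dense embeddings or graph structure instead."""
    tokens = set(" ".join(history).lower().split())
    def overlap(t: Triple) -> int:
        return len(tokens & set(f"{t.head} {t.relation} {t.tail}".lower().split()))
    return sorted(kg, key=overlap, reverse=True)[:k]

def rewrite_dialogue(history: list[str], llm) -> str:
    """Stage 2 (sketch): rewrite the last turn into a self-contained
    query so retrieval and grounding do not depend on prior context."""
    prompt = "Rewrite the last turn as a standalone question:\n" + "\n".join(history)
    return llm(prompt)

def generate_response(query: str, triples: list[Triple], llm) -> str:
    """Stage 3 (sketch): condition generation on the retrieved triples."""
    facts = "\n".join(f"({t.head}, {t.relation}, {t.tail})" for t in triples)
    return llm(f"Facts:\n{facts}\n\nQuestion: {query}\nAnswer using only the facts above.")
```

Here `llm` stands in for any prompt-to-text model call; the interplay of the three stages, not the specific scoring or prompts, is the point of the sketch.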

@article{chen2025_2506.12496,
  title={Improving Factuality for Dialogue Response Generation via Graph-Based Knowledge Augmentation},
  author={Xiangyan Chen and Yujian Gan and Matthew Purver},
  journal={arXiv preprint arXiv:2506.12496},
  year={2025}
}