Generating Grounded Responses to Counter Misinformation via Learning Efficient Fine-Grained Critiques

Comments: Main: 10 pages, 3 figures, 6 tables; bibliography: 3 pages
Abstract

Fake news and misinformation pose a significant threat to society, making efficient mitigation essential. However, manual fact-checking is costly and lacks scalability. Large Language Models (LLMs) offer promise in automating counter-response generation to mitigate misinformation, but a critical challenge lies in their tendency to hallucinate non-factual information. Existing models mainly rely on LLM self-feedback to reduce hallucination, but this approach is computationally expensive. In this paper, we propose MisMitiFact, Misinformation Mitigation grounded in Facts, an efficient framework for generating fact-grounded counter-responses at scale. MisMitiFact generates simple critique feedback to refine LLM outputs, ensuring responses are grounded in evidence. We develop lightweight, fine-grained critique models trained on data sourced from readily available fact-checking sites to identify and correct errors in key elements such as numerals, entities, and topics in LLM generations. Experiments show that MisMitiFact generates counter-responses of comparable quality to LLMs' self-feedback while using significantly smaller critique models. Importantly, it achieves a ~5x increase in feedback generation throughput, making it highly suitable for cost-effective, large-scale misinformation mitigation. Code and LLM prompt templates are at this https URL.
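
To make the described pipeline concrete, below is a minimal sketch of a critique-then-refine loop of the kind the abstract outlines: a draft counter-response is checked by lightweight critique models for numerals, entities, and topics, and the resulting fine-grained feedback drives refinement. All names here (mitigate, generate_draft, refine, CritiqueFn, max_rounds) are hypothetical placeholders for illustration, not the authors' actual code or API.

# Hypothetical sketch of a MisMitiFact-style critique-and-refine loop.
from typing import Callable, List

# Each critique model inspects (claim, draft) and returns a list of
# fine-grained critiques; an empty list means no grounding errors found.
CritiqueFn = Callable[[str, str], List[str]]

def mitigate(claim: str,
             generate_draft: Callable[[str], str],
             refine: Callable[[str, str, List[str]], str],
             critics: List[CritiqueFn],
             max_rounds: int = 2) -> str:
    """Generate a counter-response, then refine it with critique feedback
    on key elements (e.g., numerals, entities, topics)."""
    draft = generate_draft(claim)
    for _ in range(max_rounds):
        # Collect feedback from all lightweight critique models.
        feedback = [c for critic in critics for c in critic(claim, draft)]
        if not feedback:  # no factual-grounding errors detected, stop early
            break
        draft = refine(claim, draft, feedback)
    return draft

Using small dedicated critique models in place of LLM self-feedback is what the abstract credits for the throughput gain; the loop structure itself is our reading of the paper's summary, not a verified implementation detail.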

@article{xu2025_2506.05924,
  title={Generating Grounded Responses to Counter Misinformation via Learning Efficient Fine-Grained Critiques},
  author={Xiaofei Xu and Xiuzhen Zhang and Ke Deng},
  journal={arXiv preprint arXiv:2506.05924},
  year={2025}
}