The Hallucination Dilemma: Factuality-Aware Reinforcement Learning for Large Reasoning Models

30 May 2025
Junyi Li
Hwee Tou Ng
OffRL · HILM · LRM
Abstract

Large language models (LLMs) have advanced significantly on reasoning tasks through reinforcement learning (RL) optimization, achieving impressive capabilities across various challenging benchmarks. However, our empirical analysis reveals a critical drawback: reasoning-oriented RL fine-tuning significantly increases the prevalence of hallucinations. We theoretically analyze the RL training dynamics, identifying high-variance gradients, entropy-induced randomness, and susceptibility to spurious local optima as key factors leading to hallucinations. To address this drawback, we propose Factuality-aware Step-wise Policy Optimization (FSPO), an innovative RL fine-tuning algorithm that incorporates explicit factuality verification at each reasoning step. FSPO leverages automated verification against given evidence to dynamically adjust token-level advantage values, incentivizing factual correctness throughout the reasoning process. Experiments on mathematical reasoning and hallucination benchmarks with Qwen2.5 and Llama models demonstrate that FSPO effectively reduces hallucinations while enhancing reasoning accuracy, substantially improving both reliability and performance.
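The core mechanism the abstract describes, verifying each reasoning step against given evidence and shaping token-level advantage values accordingly, can be sketched in a few lines. The sketch below is a minimal illustration rather than the authors' implementation: the verifier, the step segmentation, and the bonus/penalty scale are all assumptions made for the example.

# Minimal sketch of step-wise, factuality-aware advantage shaping.
# Not the FSPO implementation; verifier, reward scale, and step
# segmentation are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ReasoningStep:
    text: str                  # one decoded reasoning step from the rollout
    token_indices: List[int]   # positions of this step's tokens in the sequence

def shape_advantages(
    base_advantages: List[float],        # per-token advantages from the base RL objective
    steps: List[ReasoningStep],
    verify: Callable[[str], bool],       # hypothetical verifier against given evidence
    bonus: float = 0.5,                  # assumed reward for a verified step
    penalty: float = -0.5,               # assumed penalty for an unsupported step
) -> List[float]:
    """Adjust token-level advantages using a per-step factuality signal."""
    shaped = list(base_advantages)
    for step in steps:
        delta = bonus if verify(step.text) else penalty
        for idx in step.token_indices:
            shaped[idx] += delta         # all tokens in a step share its factuality signal
    return shaped

# Toy usage: the second step contradicts the evidence and is penalized.
if __name__ == "__main__":
    adv = [0.1, 0.1, 0.2, 0.2]
    steps = [ReasoningStep("2 + 2 = 4", [0, 1]),
             ReasoningStep("the capital of France is Berlin", [2, 3])]
    stand_in_verifier = lambda s: "Berlin" not in s
    print(shape_advantages(adv, steps, stand_in_verifier))

The shaped advantages then feed into the usual policy-gradient update, so tokens belonging to unsupported steps are discouraged while verified steps are reinforced.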

@article{li2025_2505.24630,
  title={The Hallucination Dilemma: Factuality-Aware Reinforcement Learning for Large Reasoning Models},
  author={Junyi Li and Hwee Tou Ng},
  journal={arXiv preprint arXiv:2505.24630},
  year={2025}
}