No Free Lunch: Rethinking Internal Feedback for LLM Reasoning

20 June 2025
Yanzhi Zhang
Zhaoxi Zhang
Haoxiang Guan
Yilin Cheng
Yitong Duan
Chen Wang
Yue Wang
Shuxin Zheng
Jiyan He
Main: 9 pages · 4 figures · 5 tables · Bibliography: 4 pages · Appendix: 11 pages
Abstract

Reinforcement learning has emerged as a powerful paradigm for post-training large language models (LLMs) to improve reasoning. Approaches such as Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning with Verifiable Rewards (RLVR) have shown strong results, but they require extensive external supervision. We investigate an alternative class of methods, Reinforcement Learning from Internal Feedback (RLIF), which relies solely on intrinsic, model-derived signals instead of external rewards. In particular, we leverage unsupervised reward proxies such as token-level entropy, trajectory-level entropy, and self-certainty. Our theoretical analysis shows that these internal objectives are partially equivalent, and we empirically evaluate various RLIF strategies on challenging math reasoning benchmarks. Experimental results demonstrate that RLIF can boost the reasoning performance of base LLMs in the early phase of training, matching or surpassing RLVR techniques on these tasks. As training progresses, however, performance degrades and eventually falls below that of the model before training. Moreover, we find that RLIF yields little improvement for instruction-tuned models, indicating diminishing returns of intrinsic feedback once an LLM is already instruction-tuned. We further analyze this limitation by mixing model weights and explain the reasons behind RLIF's training behavior, providing practical guidelines for integrating internal feedback signals into LLM training. We hope our analysis of internal feedback will inform more principled and effective strategies for LLM post-training.
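To make the three intrinsic reward proxies named in the abstract concrete, the sketch below computes token-level entropy, trajectory-level entropy, and self-certainty from a model's per-token logits. This is a minimal illustration using standard textbook definitions (self-certainty here is taken as the KL divergence from the uniform distribution to the predicted distribution); the paper's exact formulations and any normalization it applies may differ, and the function name and tensor shapes are assumptions for the example.

```python
import math

import torch
import torch.nn.functional as F


def intrinsic_reward_proxies(logits: torch.Tensor, token_ids: torch.Tensor) -> dict:
    """Illustrative intrinsic (internal-feedback) reward proxies.

    logits:    [T, V] pre-softmax scores at each generated position
    token_ids: [T]    ids of the tokens actually sampled

    These are generic definitions used only as a sketch, not the paper's
    exact objectives.
    """
    log_probs = F.log_softmax(logits, dim=-1)  # [T, V]
    probs = log_probs.exp()

    # Token-level entropy: entropy of the next-token distribution at each position.
    token_entropy = -(probs * log_probs).sum(dim=-1)  # [T]

    # Trajectory-level entropy (Monte Carlo estimate): average surprisal of the
    # tokens the model actually sampled along this trajectory.
    chosen_logp = log_probs.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)  # [T]
    trajectory_entropy = -chosen_logp.mean()

    # Self-certainty (one common definition, assumed here): KL(uniform || p)
    # averaged over positions; larger when the model is more confident.
    vocab_size = logits.size(-1)
    self_certainty = (-math.log(vocab_size) - log_probs.mean(dim=-1)).mean()

    return {
        "token_entropy_mean": token_entropy.mean().item(),
        "trajectory_entropy": trajectory_entropy.item(),
        "self_certainty": self_certainty.item(),
    }


if __name__ == "__main__":
    # Toy example: 5 generated tokens over a 100-word vocabulary.
    logits = torch.randn(5, 100)
    token_ids = torch.randint(0, 100, (5,))
    print(intrinsic_reward_proxies(logits, token_ids))
```

In an RLIF-style setup, one of these scalars (or a combination) would replace the external reward when updating the policy, which is what makes the approach fully unsupervised.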

View on arXiv
@article{zhang2025_2506.17219,
  title={No Free Lunch: Rethinking Internal Feedback for LLM Reasoning},
  author={Yanzhi Zhang and Zhaoxi Zhang and Haoxiang Guan and Yilin Cheng and Yitong Duan and Chen Wang and Yue Wang and Shuxin Zheng and Jiyan He},
  journal={arXiv preprint arXiv:2506.17219},
  year={2025}
}