REA-RL: Reflection-Aware Online Reinforcement Learning for Efficient Large Reasoning Models

26 May 2025
Hexuan Deng
Wenxiang Jiao
Xuebo Liu
Jun Rao
Min Zhang
OffRL · LRM
Abstract

Large Reasoning Models (LRMs) demonstrate strong performance on complex tasks but often suffer from overthinking, which leads to substantial inference costs. Existing approaches synthesize shorter reasoning responses for LRMs to learn from, but are inefficient for online usage due to time-consuming data generation and filtering. Meanwhile, online reinforcement learning mainly adopts a length reward to encourage short reasoning responses, but tends to lose reflection ability and harm performance. To address these issues, we propose REA-RL, which introduces a small reflection model for efficient scaling in online training, offering both parallel sampling and sequential revision. In addition, a reflection reward is designed to further prevent LRMs from favoring short yet non-reflective responses. Experiments show that both methods maintain or enhance performance while significantly improving inference efficiency. Their combination achieves a good balance between performance and efficiency, reducing inference costs by 35% without compromising performance. Further analysis shows that our methods are effective because they maintain reflection frequency on hard problems while appropriately reducing it on simpler ones, without losing reflection ability. Code is available at this https URL.
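The abstract does not give the exact form of the reflection reward, so the following is only a minimal illustrative sketch of how a reward that balances correctness, response length, and the presence of reflection might look. The function name, weights, and reflection-marker list are all hypothetical assumptions, not the paper's formulation.

```python
# Hypothetical sketch of a reflection-aware reward (not the paper's exact method).
# Assumption: the reward combines answer correctness, a length penalty to curb
# overthinking, and a bonus when the response still contains reflection markers,
# so the policy is not pushed toward short yet non-reflective responses.

REFLECTION_MARKERS = ("wait", "let me check", "re-examine", "double-check")

def reflection_aware_reward(response: str,
                            is_correct: bool,
                            max_len: int = 4096,
                            alpha: float = 0.3,
                            beta: float = 0.2) -> float:
    """Reward = correctness - alpha * normalized length + beta * reflection bonus."""
    correctness = 1.0 if is_correct else 0.0
    # Penalize long responses to discourage overthinking (normalized word count).
    length_penalty = min(len(response.split()), max_len) / max_len
    # Keep at least some self-checking behavior: bonus if any reflection marker appears.
    has_reflection = any(m in response.lower() for m in REFLECTION_MARKERS)
    reflection_bonus = 1.0 if has_reflection else 0.0
    return correctness - alpha * length_penalty + beta * reflection_bonus
```

Under this kind of reward shape, a short correct answer that drops all reflection scores lower than an equally short correct answer that retains a reflection step, which is the qualitative behavior the abstract describes.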

@article{deng2025_2505.19862,
  title={REA-RL: Reflection-Aware Online Reinforcement Learning for Efficient Large Reasoning Models},
  author={Hexuan Deng and Wenxiang Jiao and Xuebo Liu and Jun Rao and Min Zhang},
  journal={arXiv preprint arXiv:2505.19862},
  year={2025}
}