Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning

8 February 2024
Zhiheng Xi
Wenxiang Chen
Boyang Hong
Senjie Jin
Rui Zheng
Wei He
Yiwen Ding
Shichun Liu
Xin Guo
Junzhe Wang
Honglin Guo
Wei Shen
Xiaoran Fan
Yuhao Zhou
Shihan Dou
Xiao Wang
Xinbo Zhang
Peng Sun
Tao Gui
Qi Zhang
Xuanjing Huang
Abstract

In this paper, we propose R^3: Learning Reasoning through Reverse Curriculum Reinforcement Learning (RL), a novel method that employs only outcome supervision to achieve the benefits of process supervision for large language models. The core challenge in applying RL to complex reasoning is to identify a sequence of actions that results in positive rewards and provides appropriate supervision for optimization. Outcome supervision provides sparse rewards for final results without identifying error locations, whereas process supervision offers step-wise rewards but requires extensive manual annotation. R^3 overcomes these limitations by learning from correct demonstrations. Specifically, R^3 progressively slides the start state of reasoning from a demonstration's end to its beginning, facilitating easier model exploration at all stages. Thus, R^3 establishes a step-wise curriculum, allowing outcome supervision to offer step-level signals and precisely pinpoint errors. Using Llama2-7B, our method surpasses the RL baseline on eight reasoning tasks by 4.1 points on average. Notably, in program-based reasoning on GSM8K, it exceeds the baseline by 4.2 points across three backbone models, and without any extra data, Codellama-7B + R^3 performs comparably to larger models or closed-source models.
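
The core mechanism described in the abstract, sliding the RL start state backwards along a correct demonstration so that early curriculum stages only require exploring the last few steps, can be pictured with a short sketch. The code below is a hypothetical, simplified illustration rather than the authors' implementation: the function names, the demonstration format (a list of reasoning steps ending in a final answer), and the toy example are assumptions, and the actual RL optimization loop (e.g., PPO over the policy model) is omitted. It only shows how the curriculum stages and the sparse outcome reward might be constructed.

```python
# Illustrative sketch of a reverse curriculum over a reasoning demonstration.
# All names here (make_reverse_curriculum, outcome_reward, the toy demo) are
# hypothetical; the sketch assumes a demonstration is a list of reasoning
# steps ending in a final answer, and that the reward is 1 for a correct
# final answer and 0 otherwise (pure outcome supervision).

from typing import Dict, List


def make_reverse_curriculum(question: str, demo_steps: List[str]) -> List[Dict]:
    """Build curriculum stages by sliding the RL start state from the
    demonstration's end back to its beginning.

    Stage 0 keeps all but the last demonstration step as a prefix (easy:
    only the final step must be explored); the last stage keeps no prefix
    at all (hard: the full reasoning chain must be explored).
    """
    stages = []
    for k in range(len(demo_steps) - 1, -1, -1):  # k = number of prefix steps kept
        stages.append({
            "prompt": question + "\n" + "\n".join(demo_steps[:k]),
            "steps_to_generate": len(demo_steps) - k,
        })
    return stages


def outcome_reward(generated_answer: str, gold_answer: str) -> float:
    """Sparse outcome supervision: reward only the final result."""
    return 1.0 if generated_answer.strip() == gold_answer.strip() else 0.0


if __name__ == "__main__":
    # Toy GSM8K-style demonstration (made up for illustration).
    question = "Tom has 3 boxes with 4 apples each. How many apples in total?"
    demo = [
        "Each box has 4 apples and there are 3 boxes.",
        "Total apples = 3 * 4 = 12.",
        "The answer is 12.",
    ]
    for i, stage in enumerate(make_reverse_curriculum(question, demo)):
        print(f"stage {i}: generate {stage['steps_to_generate']} step(s)")
        print(stage["prompt"], end="\n\n")
```

Because each stage starts the rollout from a correct partial demonstration, a failed rollout can only have gone wrong in the steps the model itself generated, which is how a sparse outcome reward ends up acting like a step-level signal in this scheme.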

View on arXiv: 2402.05808