ResearchTrend.AI



An Efficient and Precise Training Data Construction Framework for Process-supervised Reward Model in Mathematical Reasoning

4 March 2025
Wei Sun
Qianlong Du
Fuwei Cui
Jiajun Zhang
    OffRL
    LRM
Abstract

Enhancing the mathematical reasoning capabilities of Large Language Models (LLMs) is of great scientific and practical significance. Researchers typically employ process-supervised reward models (PRMs) to guide the reasoning process, effectively improving the models' reasoning abilities. However, existing methods for constructing process-supervision training data, such as manual annotation and per-step Monte Carlo estimation, are often costly or yield poor-quality labels. To address these challenges, this paper introduces a framework called EpicPRM, which annotates each intermediate reasoning step based on its quantified contribution and uses an adaptive binary search algorithm to improve both annotation precision and efficiency. Using this approach, we efficiently construct a high-quality process-supervision training dataset named Epic50k, consisting of 50k annotated intermediate steps. Compared to other publicly available datasets, the PRM trained on Epic50k demonstrates significantly superior performance. Epic50k is available at this https URL.
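The efficiency gain from binary search over per-step checking can be sketched as follows. This is an illustrative reconstruction, not the paper's released code: it assumes a hypothetical `prefix_ok` oracle (e.g. Monte Carlo rollouts from a step prefix that reach the correct final answer) and a monotonicity assumption that once a prefix fails, every longer prefix also fails. Under those assumptions, the first erroneous step in an n-step chain is found with O(log n) oracle calls instead of n.

```python
from typing import Callable, List

def locate_first_error(steps: List[str],
                       prefix_ok: Callable[[List[str]], bool]) -> int:
    """Binary-search the earliest incorrect step in a reasoning chain.

    `prefix_ok` is a hypothetical correctness oracle (not from the
    paper's code): it returns True if completions from the given step
    prefix can still reach the right answer. Correctness is assumed
    monotone in the prefix length. Returns the 0-based index of the
    first bad step, or len(steps) if every step is correct.
    """
    lo, hi = 0, len(steps)
    # Invariant: the prefix of length lo is known-correct;
    # prefixes longer than hi are known-incorrect.
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if prefix_ok(steps[:mid]):
            lo = mid        # prefix of length mid is fine; search later steps
        else:
            hi = mid - 1    # error occurs at or before step mid-1
    # lo is the longest correct prefix, so steps[lo] (if it exists)
    # is the first erroneous step.
    return lo
```

For a 10-step chain this needs about 4 oracle calls rather than 10, which is the kind of saving that matters when each call is itself a batch of expensive LLM rollouts.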

@article{sun2025_2503.02382,
  title={An Efficient and Precise Training Data Construction Framework for Process-supervised Reward Model in Mathematical Reasoning},
  author={Wei Sun and Qianlong Du and Fuwei Cui and Jiajun Zhang},
  journal={arXiv preprint arXiv:2503.02382},
  year={2025}
}