Mutual-Taught for Co-adapting Policy and Reward Models

17 May 2025
Tianyuan Shi, Canbin Huang, Fanqi Wan, Longguang Zhong, Ziyi Yang, Weizhou Shen, Xiaojun Quan, Ming Yan
Main: 7 pages, 7 figures; Bibliography: 3 pages, 3 tables; Appendix: 4 pages
Abstract

During the preference optimization of large language models (LLMs), distribution shifts may arise between newly generated model samples and the data used to train the reward model (RM). This shift reduces the efficacy of the RM, which in turn negatively impacts the performance of the policy model (PM). To address this challenge, we propose Mutual-Taught, a self-training method that iteratively improves both the PM and RM without requiring additional human annotation. Our approach mirrors the expectation-maximization (EM) algorithm. In the E-step, the PM is updated using feedback from the current RM, guiding the PM toward a better approximation of the latent optimal preference distribution. In the M-step, we update the RM by constructing training data from the outputs of the PM before and after the E-step update. This process ensures that the RM adapts to the evolving policy distribution. Experimental results demonstrate that this iterative approach leads to consistent improvements in both models. Specifically, our 8B policy model, LLaMA-3-8B-Instruct-MT, achieves a length-controlled win rate of 54.1% on AlpacaEval-2, while our 8B reward model, FsfairX-LLaMA3-RM-MT, performs on par with GPT-4o-2024-08-06 on RewardBench.
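
The abstract's E-step/M-step alternation can be summarized as a simple data-flow loop. The Python sketch below is only an illustrative reading of that description, not the authors' implementation: the callables sample, score, update_policy, and update_rm are hypothetical stand-ins for the real generation, reward-scoring, preference-optimization (e.g., DPO), and RM-finetuning routines, and treating the post-update policy's output as the preferred example in the M-step is one plausible reading of "outputs of the PM before and after the E-step update".

from typing import Callable, List, Tuple

Prompt = str
Response = str
PreferencePair = Tuple[Prompt, Response, Response]  # (prompt, chosen, rejected)

def mutual_taught(
    policy,
    reward_model,
    prompts: List[Prompt],
    sample: Callable,         # sample(policy, prompt) -> List[Response]
    score: Callable,          # score(reward_model, prompt, response) -> float
    update_policy: Callable,  # preference-optimization step (e.g., DPO) on pairs
    update_rm: Callable,      # reward-model finetuning step on pairs
    num_iterations: int = 2,
):
    """Alternate the E-step (policy update) and M-step (RM update)."""
    for _ in range(num_iterations):
        old_policy = policy

        # E-step: the current RM ranks samples from the policy, and the
        # policy is preference-optimized on the resulting (chosen, rejected) pairs.
        policy_pairs: List[PreferencePair] = []
        for p in prompts:
            ranked = sorted(sample(policy, p), key=lambda r: score(reward_model, p, r))
            policy_pairs.append((p, ranked[-1], ranked[0]))
        policy = update_policy(policy, policy_pairs)

        # M-step: pseudo-labels for the RM contrast the post-update policy's
        # output (treated here as chosen) with the pre-update policy's output
        # (rejected), so the RM tracks the evolving policy distribution.
        rm_pairs: List[PreferencePair] = []
        for p in prompts:
            rm_pairs.append((p, sample(policy, p)[0], sample(old_policy, p)[0]))
        reward_model = update_rm(reward_model, rm_pairs)

    return policy, reward_model

The sketch only fixes the data flow between the two steps; in the paper's setting each E-step would be a round of preference optimization and each M-step a round of RM finetuning on the newly constructed pairs.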

@article{shi2025_2506.06292,
  title={Mutual-Taught for Co-adapting Policy and Reward Models},
  author={Tianyuan Shi and Canbin Huang and Fanqi Wan and Longguang Zhong and Ziyi Yang and Weizhou Shen and Xiaojun Quan and Ming Yan},
  journal={arXiv preprint arXiv:2506.06292},
  year={2025}
}