
RIVAL: Reinforcement Learning with Iterative and Adversarial Optimization for Machine Translation

Abstract

Large language models (LLMs) possess strong multilingual capabilities, and combining Reinforcement Learning from Human Feedback (RLHF) with translation tasks has shown great potential. However, we observe that this paradigm performs unexpectedly poorly when applied to colloquial subtitle translation. In this work, we investigate this issue and find that the offline reward model (RM) gradually diverges from the online LLM due to distributional shift, ultimately leading to undesirable training outcomes. To address this, we propose RIVAL, an adversarial training framework that formulates the process as a min-max game between the RM and the LLM. RIVAL iteratively updates both models, with the RM trained to distinguish strong from weak translations (a qualitative preference reward) and the LLM trained to improve its translations so as to close this gap. To stabilize training and improve generalizability, we also incorporate a quantitative preference reward (e.g., BLEU) into the RM, enabling reference-free quality modeling aligned with human evaluation. Through extensive experiments, we demonstrate that the proposed adversarial training framework significantly improves upon translation baselines.
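
To make the reward design described in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of a reward-model update that combines a Bradley-Terry-style qualitative preference loss over strong/weak translation pairs with a quantitative term anchored to BLEU. The class name RewardModel, the placeholder feature vectors, and the weighting coefficient alpha are illustrative assumptions, not details from the paper.

# Hypothetical sketch of a combined qualitative + quantitative RM objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores a (source, translation) pair; here inputs are placeholder feature vectors."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.scorer(feats).squeeze(-1)  # one scalar reward per example

def rm_loss(rm, strong_feats, weak_feats, bleu_strong, bleu_weak, alpha=0.5):
    r_strong, r_weak = rm(strong_feats), rm(weak_feats)
    # Qualitative preference: the strong translation should outscore the weak one.
    pref = -F.logsigmoid(r_strong - r_weak).mean()
    # Quantitative anchor: rewards track a sentence-level BLEU signal
    # (used here to stabilize the RM; the paper's aim is reference-free scoring).
    quant = F.mse_loss(r_strong, bleu_strong) + F.mse_loss(r_weak, bleu_weak)
    return pref + alpha * quant

if __name__ == "__main__":
    rm = RewardModel()
    opt = torch.optim.AdamW(rm.parameters(), lr=1e-4)
    # Toy batch: random features and scores stand in for real encoded translations.
    strong, weak = torch.randn(8, 768), torch.randn(8, 768)
    bleu_s, bleu_w = torch.rand(8), torch.rand(8) * 0.5
    loss = rm_loss(rm, strong, weak, bleu_s, bleu_w)
    loss.backward()
    opt.step()
    print(f"RM loss: {loss.item():.4f}")

In the iterative adversarial setting sketched above, an RM update of this kind would alternate with an LLM update that maximizes the RM's reward on its own translations, corresponding to the min-max formulation described in the abstract.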

@article{li2025_2506.05070,
  title={RIVAL: Reinforcement Learning with Iterative and Adversarial Optimization for Machine Translation},
  author={Tianjiao Li and Mengran Yu and Chenyu Shi and Yanjun Zhao and Xiaojing Liu and Qiang Zhang and Qi Zhang and Xuanjing Huang and Jiayin Wang},
  journal={arXiv preprint arXiv:2506.05070},
  year={2025}
}