Gained in Translation: Privileged Pairwise Judges Enhance Multilingual Reasoning

Lintang Sutawika
Gokul Swamy
Zhiwei Steven Wu
Graham Neubig
Main: 3 pages · 7 figures · 12 tables · Bibliography: 1 page · Appendix: 17 pages
Abstract

When asked a question in a language less represented in their training data, current reasoning large language models (RLMs) often perform dramatically worse than when asked the same question in English. In response, we introduce SP3F (Self-Play with Privileged Pairwise Feedback), a two-stage framework for enhancing multilingual reasoning without any data in the target language(s). First, we perform supervised fine-tuning (SFT) on translated versions of English question-answer pairs to raise base model correctness. Second, we perform RL in a self-play fashion, with feedback from a pairwise judge that receives the English reference response as privileged information. Thus, even when none of the model's responses is completely correct, the privileged pairwise judge can still tell which response is better. End-to-end, SP3F greatly improves base model performance, even outperforming fully post-trained models on multiple math and non-math tasks with only a fraction of the training data, across the single-language, multilingual, and unseen-language generalization settings.
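To make the second stage concrete, below is a minimal, hypothetical sketch of privileged pairwise judging as the abstract describes it: the judge sees the English reference answer (privileged information) and only has to decide which of two sampled responses is better, even when neither is fully correct. The paper's judge is a learned model; here a simple token-overlap score against the reference stands in for it, and all function names are illustrative, not from the paper.

```python
def overlap_score(response: str, reference: str) -> float:
    """Toy proxy for judge quality: fraction of reference tokens found in the response."""
    ref_tokens = set(reference.lower().split())
    resp_tokens = set(response.lower().split())
    return len(ref_tokens & resp_tokens) / max(len(ref_tokens), 1)

def privileged_pairwise_judge(resp_a: str, resp_b: str, english_reference: str) -> int:
    """Return 0 if resp_a is preferred, 1 if resp_b is preferred.

    The judge is "privileged" because it conditions on the English reference,
    which the policy being trained never sees.
    """
    a = overlap_score(resp_a, english_reference)
    b = overlap_score(resp_b, english_reference)
    return 0 if a >= b else 1

# One self-play step: sample two responses from the current policy in the
# target language, judge them against the privileged English reference, and
# use the preference as the RL reward signal (e.g., winner +1, loser 0).
reference = "the answer is 42 because 6 times 7 equals 42"
resp_a = "la respuesta es 42 porque 6 por 7 es 42"  # partial overlap: "42", "6", "7"
resp_b = "no lo se"                                 # no overlap with the reference
winner = privileged_pairwise_judge(resp_a, resp_b, reference)  # prefers resp_a
```

Note that even though neither candidate matches the reference exactly, the pairwise comparison still yields a usable training signal, which is the point of the privileged judge.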
