Self-Improving Multilingual Long Reasoning via Translation-Reasoning Integrated Training

Junxiao Liu, Zhijun Wang, Yixiao Li, Zhejian Lai, Liqian Huang, Xin Huang, Xue Han, Junlan Feng, Shujian Huang
Main: 8 pages · Appendix: 6 pages · Bibliography: 2 pages · 12 figures · 6 tables
Abstract

Long reasoning models often struggle in multilingual settings: they tend to reason in English for non-English questions, and when constrained to reason in the question's language, their accuracy drops substantially. This struggle stems from limited ability in both multilingual question understanding and multilingual reasoning. To address both problems, we propose TRIT (Translation-Reasoning Integrated Training), a self-improving framework that integrates translation training into multilingual reasoning training. Without external feedback or additional multilingual data, our method jointly enhances multilingual question understanding and response generation. On MMATH, our method outperforms multiple baselines by an average of 7 percentage points, improving both answer correctness and language consistency. Further analysis reveals that integrating translation training improves cross-lingual question alignment by over 10 percentage points and enhances translation quality for both mathematical questions and general-domain text, with gains of up to 8.4 COMET points on FLORES-200.
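The abstract does not spell out the training loop, so the following is only a minimal sketch of how one TRIT-style self-improvement round might be organized. The `Model.generate` interface, the prompts, the task labels, and the self-consistency filter are all illustrative assumptions, not the paper's actual procedure; the only grounded points are that the model itself produces the translations and reasoning data, and that no external feedback or extra multilingual data is used.

```python
from typing import Protocol


class Model(Protocol):
    """Minimal text-generation interface assumed for this sketch."""
    def generate(self, prompt: str) -> str: ...


def translate_to_english(model: Model, question: str) -> str:
    # The model itself produces the translation; no external MT system.
    return model.generate(f"Translate the following question to English:\n{question}")


def reason(model: Model, question: str) -> tuple[str, str]:
    # The model reasons in the language of the question; we assume the
    # final line of the trace carries the answer (an illustrative convention).
    trace = model.generate(f"Solve step by step, in the question's language:\n{question}")
    lines = trace.strip().splitlines()
    return trace, (lines[-1] if lines else "")


def collect_training_round(model: Model, questions: list[str]) -> list[dict]:
    """One hypothetical self-improvement round: build joint translation
    and reasoning training data from the model's own outputs."""
    data: list[dict] = []
    for q in questions:
        q_en = translate_to_english(model, q)   # multilingual understanding
        trace, ans = reason(model, q)           # in-language reasoning
        _, ans_en = reason(model, q_en)         # reference run on the self-translation
        if ans and ans == ans_en:               # assumed self-consistency filter:
            # agreement between the two runs stands in for external feedback
            data.append({"task": "translation", "prompt": q, "target": q_en})
            data.append({"task": "reasoning", "prompt": q, "target": trace})
    # The returned examples would then fine-tune `model`, closing the loop.
    return data
```

If this reading is right, a consistency filter of this kind is what would let the loop run "without external feedback": only samples where the in-language answer agrees with the answer obtained from the model's own English translation survive into the next fine-tuning step.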
