Recent Large Reasoning Models (LRMs) with thinking traces have shown strong performance on English reasoning tasks. However, their ability to think in other languages is less studied. This capability is as important as answer accuracy for real-world applications, because users may find the reasoning trace useful for oversight only when it is expressed in their own language. We comprehensively evaluate two leading families of LRMs on our XReasoning benchmark and find that even the most advanced models often revert to English or produce fragmented reasoning in other languages, revealing a substantial gap in multilingual reasoning. Prompt-based interventions that force models to reason in the user's language improve readability and oversight but reduce answer accuracy, exposing an important trade-off. We further show that targeted post-training on just 100 examples mitigates this mismatch, though some accuracy loss remains. Our results highlight the limited multilingual reasoning capabilities of current LRMs and outline directions for future work. Code and data are available at this https URL.
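As a rough illustration, a prompt-based intervention of the kind described above can be sketched as a system instruction that constrains the thinking language. The paper's actual prompts are not reproduced here; the instruction wording and the model name below are illustrative assumptions, and the sketch assumes an OpenAI-compatible chat API.

# Minimal sketch of a prompt-based language-forcing intervention,
# assuming an OpenAI-compatible chat endpoint serving an LRM.
# Instruction wording and model name are assumptions, not the
# paper's actual prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_forced_thinking_language(question: str, language: str) -> str:
    """Prepend an instruction requiring the model to produce its
    reasoning trace in the user's language before answering."""
    system = (
        f"Think step by step strictly in {language}. "
        f"Write your entire reasoning trace in {language}, "
        "then give the final answer."
    )
    response = client.chat.completions.create(
        model="deepseek-reasoner",  # assumed reasoning-model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Example: force the thinking trace into Japanese.
print(ask_with_forced_thinking_language("What is 17 * 24?", "Japanese"))

Per the abstract's findings, such a constraint tends to make the trace more readable for the user at the cost of some answer accuracy.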
@article{qi2025_2505.22888,
  title={When Models Reason in Your Language: Controlling Thinking Trace Language Comes at the Cost of Accuracy},
  author={Jirui Qi and Shan Chen and Zidi Xiong and Raquel Fernández and Danielle S. Bitterman and Arianna Bisazza},
  journal={arXiv preprint arXiv:2505.22888},
  year={2025}
}