In this paper, we investigate how large language models (LLMs) process non-English tokens within their layer representations, a question that remains open despite significant advances in the field. Using representation steering, specifically by adding a learned vector to the activations of a single model layer, we demonstrate that steering that one layer can notably enhance multilingual performance. Our analysis shows that this approach achieves results comparable to translation baselines and surpasses state-of-the-art prompt optimization methods. Additionally, we highlight how techniques such as supervised fine-tuning (\textsc{sft}) and reinforcement learning from human feedback (\textsc{rlhf}) improve multilingual capabilities by altering representation spaces, and we illustrate how these methods relate to our approach of reshaping the layer representations of LLMs.
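As a minimal sketch of the steering mechanism described above, the example below adds a fixed vector to the activations of one decoder layer through a forward hook. It assumes a Hugging Face Llama-style causal LM; the model name, layer index, and randomly initialized vector are illustrative placeholders, whereas the paper learns the steering vector.

```python
# Minimal sketch of single-layer representation steering, assuming a
# Llama-style causal LM whose decoder layers are exposed as model.model.layers.
# The steering vector here is random for illustration; in practice it is learned.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical model choice
layer_idx = 15                           # hypothetical layer to steer

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

hidden_size = model.config.hidden_size
steering_vector = torch.randn(hidden_size) * 0.01  # placeholder for a learned vector

def steer_hook(module, inputs, output):
    # Decoder layers return a tuple whose first element holds the hidden states
    # of shape (batch, seq_len, hidden_size); add the vector at every position.
    hidden = output[0] + steering_vector.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.model.layers[layer_idx].register_forward_hook(steer_hook)

prompt = "Translate to French: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore the unsteered model
```

Registering the hook on a single layer keeps the rest of the forward pass untouched, which mirrors the paper's claim that intervening on one layer's representations is sufficient to shift multilingual behavior.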
@article{mahmoud2025_2505.12584,
  title   = {Improving Multilingual Language Models by Aligning Representations through Steering},
  author  = {Omar Mahmoud and Buddhika Laknath Semage and Thommen George Karimpanal and Santu Rana},
  journal = {arXiv preprint arXiv:2505.12584},
  year    = {2025}
}