ResearchTrend.AI
Improving Multilingual Language Models by Aligning Representations through Steering

19 May 2025
Omar Mahmoud
B. L. Semage
Thommen George Karimpanal
Santu Rana
Abstract

In this paper, we investigate how large language models (LLMs) process non-English tokens within their layer representations, a question that remains open despite significant advances in the field. Using representation steering, specifically adding a learned vector to the activations of a single model layer, we demonstrate that this intervention notably enhances multilingual performance. Our analysis shows that the approach achieves results comparable to translation baselines and surpasses state-of-the-art prompt-optimization methods. Additionally, we highlight how techniques such as supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) improve multilingual capabilities by altering representation spaces, and we illustrate how these methods align with our approach of reshaping LLM layer representations.
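The core intervention described in the abstract, adding a learned vector to a single layer's activations, can be sketched in a few lines. This is an illustrative NumPy mock-up, not the authors' implementation: the function name `steer`, the scaling factor `alpha`, and the toy dimensions are all assumptions; in practice the steering vector would be learned and applied inside a transformer's forward pass (e.g., via a hook on one layer).

```python
import numpy as np

def steer(hidden_states, steering_vector, alpha=1.0):
    """Add a steering vector to every token's activation at one layer.

    hidden_states: (seq_len, hidden_dim) activations from a single layer
    steering_vector: (hidden_dim,) learned direction, broadcast over tokens
    alpha: illustrative scaling factor controlling intervention strength
    """
    return hidden_states + alpha * steering_vector

# Toy example: 4 tokens, hidden size 8 (dimensions chosen for illustration)
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))          # stand-in for one layer's activations
v = rng.normal(size=(8,))            # stand-in for a learned steering vector
h_steered = steer(h, v, alpha=0.5)   # every token shifted by the same offset
```

Because the same vector is added to every token position, the intervention shifts the whole layer representation in one learned direction rather than editing individual tokens.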

@article{mahmoud2025_2505.12584,
  title={Improving Multilingual Language Models by Aligning Representations through Steering},
  author={Omar Mahmoud and Buddhika Laknath Semage and Thommen George Karimpanal and Santu Rana},
  journal={arXiv preprint arXiv:2505.12584},
  year={2025}
}