Unveiling Language-Specific Features in Large Language Models via Sparse Autoencoders

8 May 2025
Boyi Deng
Yu Wan
Yidan Zhang
Baosong Yang
Fuli Feng
Abstract

The mechanisms behind multilingual capabilities in Large Language Models (LLMs) have been examined using neuron-based or internal-activation-based methods. However, these methods often face challenges such as superposition and layer-wise activation variance, which limit their reliability. Sparse Autoencoders (SAEs) offer a more nuanced analysis by decomposing the activations of LLMs into sparse linear combinations of SAE features. We introduce a novel metric to assess the monolinguality of features obtained from SAEs, and find that some features are strongly related to specific languages. Additionally, we show that ablating these SAE features significantly reduces the LLM's abilities in only one language, leaving other languages almost unaffected. Interestingly, we find that some languages have multiple synergistic SAE features, and ablating them together yields greater improvement than ablating them individually. Moreover, we leverage these SAE-derived language-specific features to enhance steering vectors, achieving control over the language generated by LLMs.
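The abstract describes three steps: decomposing LLM activations with an SAE, scoring SAE features for monolinguality, and ablating language-specific features. The sketch below is a minimal, hypothetical PyTorch illustration of those steps, not the paper's implementation; the SparseAutoencoder class, the monolinguality_score function, the language labels, and the random data are assumptions made for illustration, and the score is a simple mean-activation contrast rather than the paper's exact metric.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    # Minimal SAE: encodes an activation x into a sparse, non-negative code z
    # and reconstructs x as a linear combination of decoder features.
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)
        self.dec = nn.Linear(d_sae, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.enc(x))      # sparse feature activations

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.dec(z)                  # linear combination of decoder features

    def forward(self, x: torch.Tensor, ablate_idx=None) -> torch.Tensor:
        z = self.encode(x)
        if ablate_idx is not None:
            z = z.clone()
            z[..., ablate_idx] = 0.0        # ablate selected SAE features
        return self.decode(z)

def monolinguality_score(acts_by_lang, feature: int, target: str) -> float:
    # Toy monolinguality proxy (hypothetical): mean activation of `feature`
    # on the target language minus its highest mean activation on any other language.
    mean_act = {lang: acts[:, feature].mean().item() for lang, acts in acts_by_lang.items()}
    others = [v for lang, v in mean_act.items() if lang != target]
    return mean_act[target] - max(others)

if __name__ == "__main__":
    torch.manual_seed(0)
    sae = SparseAutoencoder(d_model=16, d_sae=64)
    # Stand-ins for residual-stream activations collected per language.
    acts = {"en": sae.encode(torch.randn(32, 16)),
            "zh": sae.encode(torch.randn(32, 16))}
    print("monolinguality of feature 3 (en):", monolinguality_score(acts, feature=3, target="en"))
    x = torch.randn(8, 16)
    print("reconstruction with feature 3 ablated:", sae(x, ablate_idx=[3]).shape)

In this sketch, ablation simply zeroes the chosen feature activations before decoding; in practice the ablated reconstruction would be patched back into the model's forward pass to measure the per-language effect.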

@article{deng2025_2505.05111,
  title={Unveiling Language-Specific Features in Large Language Models via Sparse Autoencoders},
  author={Boyi Deng and Yu Wan and Yidan Zhang and Baosong Yang and Fuli Feng},
  journal={arXiv preprint arXiv:2505.05111},
  year={2025}
}