Understanding (Un)Reliability of Steering Vectors in Language Models

28 May 2025
Joschka Braun
Carsten Eickhoff
David M. Krueger
Seyed Ali Bahrainian
Dmitrii Krasheninnikov
Abstract

Steering vectors are a lightweight method to control language model behavior by adding a learned bias to the activations at inference time. Although steering demonstrates promising performance, recent work shows that it can be unreliable or even counterproductive in some cases. This paper studies the influence of prompt types and the geometry of activation differences on steering reliability. First, we find that all seven prompt types used in our experiments produce a net positive steering effect, but exhibit high variance across samples and often produce an effect opposite to the desired one. No prompt type clearly outperforms the others, and yet the steering vectors resulting from the different prompt types often differ directionally (as measured by cosine similarity). Second, we show that higher cosine similarity between training set activation differences predicts more effective steering. Finally, we observe that datasets where positive and negative activations are better separated are more steerable. Our results suggest that vector steering is unreliable when the target behavior is not represented by a coherent direction.
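The two core quantities in the abstract can be sketched in a few lines: a difference-in-means steering vector built from paired activations, and the reliability predictor measured as cosine similarity among the per-sample activation differences. The sketch below uses NumPy with random toy data standing in for hidden activations; all array shapes, the scale `alpha`, and the data itself are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for one layer's hidden activations (hypothetical data):
# rows = samples, cols = hidden dimension.
pos_acts = rng.normal(loc=0.5, scale=1.0, size=(64, 16))   # prompts exhibiting the target behavior
neg_acts = rng.normal(loc=-0.5, scale=1.0, size=(64, 16))  # contrastive prompts without it

# Difference-in-means steering vector: mean of per-sample activation differences.
diffs = pos_acts - neg_acts
steering_vector = diffs.mean(axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Reliability predictor from the abstract: how directionally coherent are the
# training-set activation differences? Here we measure each sample's alignment
# with the mean direction; low average alignment suggests unreliable steering.
alignments = [cosine(d, steering_vector) for d in diffs]
mean_alignment = float(np.mean(alignments))

# Steering at inference time: add the (scaled) vector as a bias to an activation.
alpha = 4.0                      # steering strength, chosen arbitrarily here
h = rng.normal(size=16)          # some hidden activation during generation
h_steered = h + alpha * steering_vector
```

In a real model the bias would be added inside the forward pass (e.g. via a hook on the chosen layer) rather than to a standalone array; the toy data here is well separated by construction, so `mean_alignment` comes out high, which is the regime the paper associates with effective steering.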

@article{braun2025_2505.22637,
  title={Understanding (Un)Reliability of Steering Vectors in Language Models},
  author={Joschka Braun and Carsten Eickhoff and David Krueger and Seyed Ali Bahrainian and Dmitrii Krasheninnikov},
  journal={arXiv preprint arXiv:2505.22637},
  year={2025}
}