Steerable Chatbots: Personalizing LLMs with Preference-Based Activation Steering

7 May 2025
Jessica Y. Bo, Tianyu Xu, Ishan Chatterjee, Katrina Passarella-Ward, Achin Kulshrestha, D Shin
Abstract

As large language models (LLMs) improve in their capacity to serve as personal AI assistants, their ability to output uniquely tailored, personalized responses that align with the soft preferences of their users is essential for enhancing user satisfaction and retention. However, untrained lay users have poor prompt specification abilities and often struggle to convey their latent preferences to AI assistants. To address this, we leverage activation steering to guide LLMs to align with interpretable preference dimensions during inference. In contrast to memory-based personalization methods that require a longer user history, steering is extremely lightweight and can be easily controlled by the user via a linear strength factor. We embed steering into three different interactive chatbot interfaces and conduct a within-subjects user study (n=14) to investigate how end users prefer to personalize their conversations. The results demonstrate the effectiveness of preference-based steering for aligning real-world conversations with hidden user preferences, and highlight further insights on how diverse values around control, usability, and transparency lead users to prefer different interfaces.
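As a rough illustration of the mechanism the abstract describes, the sketch below adds a preference steering vector, scaled by a user-controlled linear strength factor, to one transformer block's residual stream at inference time. It assumes a Hugging Face GPT-2 model; the layer index, contrastive prompts, and strength value are illustrative placeholders, not the paper's actual setup.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; any decoder-only LM works similarly
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6  # which transformer block's residual stream to steer (assumption)

def hidden_at_layer(prompt: str) -> torch.Tensor:
    """Hidden state of the last token at the output of block LAYER."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding layer; [LAYER + 1] is block LAYER's output
    return out.hidden_states[LAYER + 1][0, -1]

# Derive a steering vector as the activation difference between contrastive
# prompts exemplifying the two ends of an interpretable preference dimension.
v = hidden_at_layer("Respond in a warm, casual tone.") - \
    hidden_at_layer("Respond in a formal, detached tone.")
v = v / v.norm()

def make_hook(alpha: float):
    """Forward hook that adds alpha * v to the block's output activations."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * v.to(hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return hook

def steered_generate(prompt: str, alpha: float) -> str:
    """Generate a response steered by the linear strength factor alpha."""
    handle = model.transformer.h[LAYER].register_forward_hook(make_hook(alpha))
    try:
        ids = tok(prompt, return_tensors="pt")
        out = model.generate(**ids, max_new_tokens=60, do_sample=False)
    finally:
        handle.remove()  # always detach the hook, even if generation fails
    return tok.decode(out[0], skip_special_tokens=True)

# alpha = 0 leaves the model unsteered; larger positive (or negative) values
# push responses toward (or away from) the preferred end of the dimension.
print(steered_generate("Tell me about your day.", alpha=4.0))

Because steering reduces to a single scalar per preference dimension, a chatbot interface can expose it directly, for example as a slider, which is what makes it lightweight compared to memory-based personalization.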

@article{bo2025_2505.04260,
  title={Steerable Chatbots: Personalizing LLMs with Preference-Based Activation Steering},
  author={Jessica Y. Bo and Tianyu Xu and Ishan Chatterjee and Katrina Passarella-Ward and Achin Kulshrestha and D Shin},
  journal={arXiv preprint arXiv:2505.04260},
  year={2025}
}