
Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders

14 June 2025
Ananya Joshi
Celia Cintas
Skyler Speakman
Main: 11 pages · 14 figures · 8 tables · Bibliography: 4 pages · Appendix: 4 pages
Abstract

Recent work shows that Sparse Autoencoders (SAEs) applied to large language model (LLM) layers have neurons corresponding to interpretable concepts. These SAE neurons can be modified to align generated outputs, but only towards pre-identified topics and with some parameter tuning. Our approach leverages the observational and modification properties of SAEs to enable alignment for any topic. This method 1) scores each SAE neuron by its semantic similarity to an alignment text and 2) uses these scores to modify SAE-layer-level outputs by emphasizing topic-aligned neurons. We assess the alignment capabilities of this approach on diverse public topic datasets, including Amazon reviews, Medicine, and Sycophancy, across the currently available open-source LLM and SAE pairs (GPT2 and Gemma) with multiple SAE configurations. Experiments aligning to medical prompts reveal several benefits over fine-tuning, including increased average language acceptability (0.25 vs. 0.5), reduced training time across multiple alignment topics (333.6s vs. 62s), and acceptable inference time for many applications (+0.00092s/token). Our open-source code is available at this http URL.
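The abstract describes a two-step procedure: score each SAE neuron against an alignment text, then emphasize the topic-aligned neurons before decoding. Below is a minimal PyTorch sketch of one way to realize that idea, assuming the SAE decoder rows serve as per-neuron directions and that the alignment text has already been embedded into the model's activation space. The function names, the steering strength alpha, and the boosting rule are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn.functional as F

def topic_alignment_scores(decoder_dirs: torch.Tensor,
                           alignment_emb: torch.Tensor) -> torch.Tensor:
    """Step 1 (assumed): score each SAE neuron by the cosine similarity
    between its decoder direction and the alignment-text embedding."""
    # decoder_dirs: [n_neurons, d_model], alignment_emb: [d_model] -> scores: [n_neurons]
    return F.cosine_similarity(decoder_dirs, alignment_emb.unsqueeze(0), dim=-1)

def steer_sae_activations(sae_acts: torch.Tensor,
                          scores: torch.Tensor,
                          alpha: float = 4.0) -> torch.Tensor:
    """Step 2 (assumed): emphasize topic-aligned neurons by scaling their
    activations; alpha is a hypothetical steering strength."""
    weights = 1.0 + alpha * scores.clamp(min=0.0)  # boost only positively aligned neurons
    return sae_acts * weights

# Toy usage with random tensors standing in for a real SAE and embedder.
d_model, n_neurons = 768, 16384
decoder_dirs = torch.randn(n_neurons, d_model)   # rows of the SAE decoder
alignment_emb = torch.randn(d_model)             # embedding of the alignment text
sae_acts = torch.relu(torch.randn(n_neurons))    # SAE activations for one token

scores = topic_alignment_scores(decoder_dirs, alignment_emb)
steered = steer_sae_activations(sae_acts, scores)
# The steered activations would then be decoded back: x_hat = steered @ decoder_dirs

Since the scoring pass is a single similarity computation rather than gradient-based training, this structure is consistent with the abstract's reported training-time advantage over fine-tuning.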

@article{joshi2025_2506.12576,
  title={Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders},
  author={Ananya Joshi and Celia Cintas and Skyler Speakman},
  journal={arXiv preprint arXiv:2506.12576},
  year={2025}
}