Beyond Input Activations: Identifying Influential Latents by Gradient Sparse Autoencoders

Sparse Autoencoders (SAEs) have recently emerged as powerful tools for interpreting and steering the internal representations of large language models (LLMs). However, conventional approaches to analyzing SAEs rely solely on input-side activations, without considering the causal influence of each latent feature on the model's output. This work is built on two key hypotheses: (1) activated latents do not contribute equally to the construction of the model's output, and (2) only latents with high causal influence are effective for model steering. To validate these hypotheses, we propose Gradient Sparse Autoencoder (GradSAE), a simple yet effective method that identifies the most influential latents by incorporating output-side gradient information.
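The core idea can be sketched in a few lines: instead of ranking latents by their input-side activation alone, weight each activation by the gradient of the model's output with respect to that latent. The following toy example is a minimal sketch of this intuition, not the paper's implementation; the encoder weights, the linear readout standing in for the LLM's output path, and all shapes are illustrative assumptions.

```python
import numpy as np

# Toy sketch of gradient-weighted latent scoring (illustrative, not GradSAE's API).
rng = np.random.default_rng(0)

d_model, d_latent = 8, 16
W_enc = rng.normal(size=(d_model, d_latent))  # assumed SAE encoder weights
w_out = rng.normal(size=d_latent)             # linear readout standing in for the model's output

x = rng.normal(size=d_model)                  # toy residual-stream activation (input side)
z = np.maximum(x @ W_enc, 0.0)                # ReLU latent activations

# For this linear readout, d(output)/dz is w_out, masked by the ReLU derivative
# (1 where the latent is active, 0 otherwise).
grad = w_out * (z > 0)

# Influence score: activation weighted by output-side gradient magnitude.
influence = np.abs(z * grad)

top_by_activation = np.argsort(z)[::-1][:3]
top_by_influence = np.argsort(influence)[::-1][:3]
print("top latents by activation:", top_by_activation)
print("top latents by influence:", top_by_influence)
```

Note that the two rankings can differ: a strongly activated latent whose gradient is near zero receives a low influence score, which is the motivation for hypothesis (1) above.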
@article{shu2025_2505.08080,
  title={Beyond Input Activations: Identifying Influential Latents by Gradient Sparse Autoencoders},
  author={Dong Shu and Xuansheng Wu and Haiyan Zhao and Mengnan Du and Ninghao Liu},
  journal={arXiv preprint arXiv:2505.08080},
  year={2025}
}