N2G: A Scalable Approach for Quantifying Interpretable Neuron Representations in Large Language Models

22 April 2023
Alex Foote
Neel Nanda
Esben Kran
Ioannis Konstas
Fazl Barez
Abstract

Understanding the function of individual neurons within language models is essential for mechanistic interpretability research. We propose Neuron to Graph (N2G), a tool which takes a neuron and its dataset examples and automatically distills the neuron's behaviour on those examples into an interpretable graph. This presents a less labour-intensive approach to interpreting neurons than current manual methods, one that will better scale these methods to Large Language Models (LLMs). We use truncation and saliency methods to present only the important tokens, and we augment the dataset examples with more diverse samples to better capture the extent of neuron behaviour. These graphs can be visualised to aid manual interpretation by researchers, and they can also output token activations on text for comparison against the neuron's ground-truth activations for automatic validation. N2G represents a step towards scalable interpretability methods by allowing us to convert neurons in an LLM into interpretable representations of measurable quality.
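To make the pipeline concrete, below is a minimal Python sketch of an N2G-style workflow: truncate each dataset example around the neuron's peak activation, test each token's importance by padding it out and measuring the drop in activation, and record the resulting important-token patterns as a graph. This is an illustrative sketch, not the authors' implementation; the get_activations stub, the pad-based saliency test, the window size, and the edge-set graph representation are all assumptions.

```python
# Minimal sketch of an N2G-style pipeline (illustrative only, not the
# authors' code). `get_activations` is a hypothetical stand-in for a
# real model hook returning the target neuron's activation per token.

from collections import defaultdict

def get_activations(tokens):
    # Toy "neuron": fires on the token "except" when preceded by a comma.
    return [1.0 if t == "except" and i > 0 and tokens[i - 1] == ","
            else 0.0
            for i, t in enumerate(tokens)]

def truncate(tokens, window=3):
    """Keep only the context immediately before the max-activating token."""
    acts = get_activations(tokens)
    peak = max(range(len(tokens)), key=lambda i: acts[i])
    start = max(0, peak - window)
    return tokens[start:peak + 1]

def salient_tokens(tokens, threshold=0.5):
    """Mark a token as important if replacing it with a pad token
    reduces the peak activation by more than `threshold` (relative)."""
    base = max(get_activations(tokens))
    important = []
    for i in range(len(tokens)):
        perturbed = tokens[:i] + ["<pad>"] + tokens[i + 1:]
        drop = base - max(get_activations(perturbed))
        important.append(drop / base > threshold if base > 0 else False)
    return important

def build_graph(examples):
    """Collapse each example to a pattern of important tokens ('*' marks
    unimportant positions) and record adjacent-token edges."""
    graph = defaultdict(set)
    for tokens in examples:
        trunc = truncate(tokens)
        mask = salient_tokens(trunc)
        pattern = [t if m else "*" for t, m in zip(trunc, mask)]
        for prev, nxt in zip(pattern, pattern[1:]):
            graph[prev].add(nxt)
    return dict(graph)

if __name__ == "__main__":
    examples = [
        "all cases , except this one".split(),
        "everything , except the last".split(),
    ]
    # For the toy neuron above, both examples collapse to patterns ending
    # in ", except", so the graph captures the comma-then-"except" rule.
    print(build_graph(examples))
```

A real implementation would read activations from a model hook rather than a stub, and would compare the graph's predicted token activations against the neuron's ground-truth activations on held-out text, as the abstract describes, to score the representation's quality.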
