The Origins of Representation Manifolds in Large Language Models

23 May 2025
Alexander Modell
Patrick Rubin-Delanchy
Nick Whiteley
Main: 8 pages, 6 figures; Bibliography: 5 pages; Appendix: 3 pages
Abstract

There is a large ongoing scientific effort in mechanistic interpretability to map embeddings and internal representations of AI systems into human-understandable concepts. A key element of this effort is the linear representation hypothesis, which posits that neural representations are sparse linear combinations of "almost-orthogonal" direction vectors, reflecting the presence or absence of different features. This model underpins the use of sparse autoencoders to recover features from representations. Moving towards a fuller model of features, in which neural representations could encode not just the presence but also a potentially continuous and multidimensional value for a feature, has been a subject of intense recent discourse. We describe why and how a feature might be represented as a manifold, demonstrating in particular that cosine similarity in representation space may encode the intrinsic geometry of a feature through shortest, on-manifold paths, potentially answering the question of how distance in representation space and relatedness in concept space could be connected. The critical assumptions and predictions of the theory are validated on text embeddings and token activations of large language models.
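As a rough illustration of the manifold picture described above (not the paper's own construction): if a feature takes values on a circle and representations trace out that circle at constant norm in embedding space, then cosine similarity between two representations is a monotone function of the shortest on-manifold (arc-length) distance between the corresponding feature values. The minimal sketch below assumes a toy circular feature, an arbitrary ambient dimension, and a random orthonormal embedding; these choices are illustrative assumptions, not taken from the paper.

# Toy sketch (illustrative assumptions only): embed a one-dimensional circular
# "feature manifold" in a high-dimensional representation space and compare
# cosine similarity between representations with geodesic (on-manifold)
# distance between the underlying feature values.
import numpy as np

rng = np.random.default_rng(0)
d = 256   # ambient representation dimension (arbitrary choice)
n = 200   # number of sampled feature values

# Feature values on a circle; geodesic distance is arc length.
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)

# A simple smooth embedding: map the circle through a random orthonormal
# 2-frame in R^d, so every representation has unit norm.
Q, _ = np.linalg.qr(rng.standard_normal((d, 2)))
reps = np.stack([np.cos(theta), np.sin(theta)], axis=1) @ Q.T   # shape (n, d)

# Cosine similarity between all pairs of representations
# (rows of reps are unit-norm, so the dot product is the cosine).
cos_sim = reps @ reps.T

# Shortest on-circle (geodesic) distance between feature values.
diff = np.abs(theta[:, None] - theta[None, :])
geodesic = np.minimum(diff, 2 * np.pi - diff)

# For this embedding, cosine similarity is exactly cos(geodesic distance),
# i.e. a monotone function of on-manifold distance.
print(np.allclose(cos_sim, np.cos(geodesic), atol=1e-10))   # True

In this toy case the correspondence is exact; for realistic embeddings it would at best hold approximately, which is the kind of prediction the abstract says is tested on text embeddings and token activations.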

@article{modell2025_2505.18235,
  title={The Origins of Representation Manifolds in Large Language Models},
  author={Alexander Modell and Patrick Rubin-Delanchy and Nick Whiteley},
  journal={arXiv preprint arXiv:2505.18235},
  year={2025}
}