Decoding Dense Embeddings: Sparse Autoencoders for Interpreting and Discretizing Dense Retrieval

28 May 2025
Seongwan Park
Taeklim Kim
Youngjoong Ko
Main: 7 pages · Appendix: 6 pages · Bibliography: 3 pages · 9 figures · 11 tables
Abstract

Despite their strong performance, Dense Passage Retrieval (DPR) models suffer from a lack of interpretability. In this work, we propose a novel interpretability framework that leverages Sparse Autoencoders (SAEs) to decompose previously uninterpretable dense embeddings from DPR models into distinct, interpretable latent concepts. We generate natural language descriptions for each latent concept, enabling human interpretations of both the dense embeddings and the query-document similarity scores of DPR models. We further introduce Concept-Level Sparse Retrieval (CL-SR), a retrieval framework that directly utilizes the extracted latent concepts as indexing units. CL-SR effectively combines the semantic expressiveness of dense embeddings with the transparency and efficiency of sparse representations. We show that CL-SR achieves high index-space and computational efficiency while maintaining robust performance across vocabulary and semantic mismatches.
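The abstract's core mechanism lends itself to a brief illustration: an SAE maps a dense DPR embedding to a sparse, non-negative latent vector, and the active latent dimensions serve as discrete, interpretable concepts that CL-SR can use as indexing units. Below is a minimal sketch assuming a standard ReLU sparse autoencoder; the dimensions, class names, and top-k concept selection are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a sparse autoencoder (SAE)
# decomposes a dense retrieval embedding into sparse latent activations,
# whose active dimensions act as concept-level index terms.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, embed_dim: int = 768, latent_dim: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(embed_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        z = torch.relu(self.encoder(x))  # ReLU keeps activations non-negative and sparse
        return self.decoder(z), z        # reconstruction and latent concept activations

sae = SparseAutoencoder()
dense_embedding = torch.randn(1, 768)    # stand-in for a DPR passage embedding
reconstruction, latents = sae(dense_embedding)

# Treat the most active latents as discrete indexing units, analogous to
# how CL-SR indexes documents by their extracted latent concepts.
top_values, top_concepts = latents.topk(k=8, dim=-1)
print("active concept ids:", top_concepts.tolist())
```

In a retrieval setting, these concept ids could populate an inverted index, so query-document scoring reduces to overlap on a small set of human-describable concepts rather than an opaque dense dot product.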

@article{park2025_2506.00041,
  title={Decoding Dense Embeddings: Sparse Autoencoders for Interpreting and Discretizing Dense Retrieval},
  author={Seongwan Park and Taeklim Kim and Youngjoong Ko},
  journal={arXiv preprint arXiv:2506.00041},
  year={2025}
}