The Birth of Knowledge: Emergent Features across Time, Space, and Scale in Large Language Models

Main: 9 pages · Bibliography: 1 page · Appendix: 5 pages · 8 figures · 2 tables
Abstract
This paper studies the emergence of interpretable categorical features within large language models (LLMs), analyzing their behavior across training checkpoints (time), transformer layers (space), and varying model sizes (scale). Using sparse autoencoders for mechanistic interpretability, we identify when and where specific semantic concepts emerge within neural activations. Results indicate clear temporal and scale-specific thresholds for feature emergence across multiple domains. Notably, spatial analysis reveals unexpected semantic reactivation, with early-layer features re-emerging at later layers, challenging standard assumptions about representational dynamics in transformer models.
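For readers unfamiliar with the method named in the abstract, the following is a minimal sketch of a sparse autoencoder trained on transformer activations. It assumes a standard linear encoder/decoder with a ReLU code and an L1 sparsity penalty; the dimensions, penalty weight, and training loop are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder over cached LLM activations.

    Maps d_model-dim activations into an overcomplete n_features-dim code;
    the L1 penalty encourages sparse, potentially interpretable features.
    All hyperparameters here are illustrative, not the paper's values.
    """
    def __init__(self, d_model: int = 768, n_features: int = 4096, l1_coeff: float = 1e-3):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)
        self.l1_coeff = l1_coeff

    def forward(self, acts: torch.Tensor):
        # Non-negative sparse feature code.
        features = torch.relu(self.encoder(acts))
        # Reconstruction of the original activations.
        recon = self.decoder(features)
        return recon, features

    def loss(self, acts: torch.Tensor) -> torch.Tensor:
        recon, features = self(acts)
        # Reconstruction error plus L1 sparsity penalty on the code.
        mse = torch.mean((recon - acts) ** 2)
        l1 = self.l1_coeff * features.abs().mean()
        return mse + l1

# Usage sketch: fit one autoencoder per (checkpoint, layer, model size) slice,
# then inspect which features activate on which inputs.
sae = SparseAutoencoder()
batch = torch.randn(32, 768)  # stand-in for activations collected from one layer
loss = sae.loss(batch)
loss.backward()
```

Repeating this fit across training checkpoints (time), layers (space), and model sizes (scale) is the kind of sweep the abstract describes; the exact feature-identification and thresholding procedure is detailed in the paper itself.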
@article{sawmya2025_2505.19440,
  title={The Birth of Knowledge: Emergent Features across Time, Space, and Scale in Large Language Models},
  author={Shashata Sawmya and Micah Adler and Nir Shavit},
  journal={arXiv preprint arXiv:2505.19440},
  year={2025}
}