Emergent Stack Representations in Modeling Counter Languages Using Transformers

3 February 2025
Utkarsh Tiwari
Aviral Gupta
Michael Hahn
Abstract

Transformer architectures are the backbone of most modern language models, but understanding the inner workings of these models largely remains an open problem. One way past research has tackled this problem is by isolating the learning capabilities of these architectures, training them on well-understood classes of formal languages. We extend this literature by analyzing models trained on counter languages, which can be modeled using counter variables. We train transformer models on 4 counter languages and equivalently formulate these languages using stacks, whose depths can be understood as the counter values. We then probe their internal representations for stack depths at each input token to show that these models, when trained as next-token predictors, learn stack-like representations. This brings us closer to understanding the algorithmic details of how transformers learn languages and helps in circuit discovery.
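The sketch below illustrates the general idea described in the abstract, not the paper's actual setup: a Dyck-1-style counter language where "(" increments and ")" decrements the counter (stack depth), a tiny causal transformer trained as a next-token predictor, and a linear probe fit on its hidden states to read out the stack depth at each position. All names, model sizes, and the probe training loop are illustrative assumptions; the real model would need to be trained on the language before probing.

```python
import torch
import torch.nn as nn

# Hypothetical two-symbol vocabulary for a Dyck-1-style counter language:
# "(" pushes (counter +1), ")" pops (counter -1).
VOCAB = {"(": 0, ")": 1}

def stack_depths(s: str) -> list[int]:
    """Ground-truth counter value (stack depth) after reading each token."""
    depth, out = 0, []
    for ch in s:
        depth += 1 if ch == "(" else -1
        out.append(depth)
    return out

class TinyTransformerLM(nn.Module):
    """Small causal transformer used as a next-token predictor (illustrative sizes)."""
    def __init__(self, vocab_size=2, d_model=64, n_heads=4, n_layers=2, max_len=128):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, ids):
        T = ids.size(1)
        x = self.tok(ids) + self.pos(torch.arange(T, device=ids.device))
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(ids.device)
        h = self.encoder(x, mask=mask)   # one hidden state per input token
        return self.head(h), h           # next-token logits + internal representations

def fit_depth_probe(hidden, depths, d_model=64, steps=200, lr=1e-2):
    """Linear probe: predict the stack depth at each position from the hidden state."""
    probe = nn.Linear(d_model, 1)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(probe(hidden).squeeze(-1), depths)
        loss.backward()
        opt.step()
    return probe

if __name__ == "__main__":
    s = "(()(()))"
    ids = torch.tensor([[VOCAB[c] for c in s]])
    depths = torch.tensor([stack_depths(s)], dtype=torch.float)

    model = TinyTransformerLM()          # in practice: first train on next-token prediction
    _, h = model(ids)
    probe = fit_depth_probe(h.detach().view(-1, 64), depths.view(-1))
```

If the probe recovers the depths with low error on held-out strings, the hidden states carry a (linearly decodable) stack-like representation, which is the kind of evidence the abstract refers to.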

View on arXiv
@article{tiwari2025_2502.01432,
  title={Emergent Stack Representations in Modeling Counter Languages Using Transformers},
  author={Utkarsh Tiwari and Aviral Gupta and Michael Hahn},
  journal={arXiv preprint arXiv:2502.01432},
  year={2025}
}