Emergent Symbolic Mechanisms Support Abstract Reasoning in Large Language Models

27 February 2025
Yukang Yang
Declan Campbell
Kaixuan Huang
Mengdi Wang
Jonathan D. Cohen
Taylor W. Webb
Abstract

Many recent studies have found evidence for emergent reasoning capabilities in large language models, but debate persists concerning the robustness of these capabilities, and the extent to which they depend on structured reasoning mechanisms. To shed light on these issues, we perform a comprehensive study of the internal mechanisms that support abstract rule induction in an open-source language model (Llama3-70B). We identify an emergent symbolic architecture that implements abstract reasoning via a series of three computations. In early layers, symbol abstraction heads convert input tokens to abstract variables based on the relations between those tokens. In intermediate layers, symbolic induction heads perform sequence induction over these abstract variables. Finally, in later layers, retrieval heads predict the next token by retrieving the value associated with the predicted abstract variable. These results point toward a resolution of the longstanding debate between symbolic and neural network approaches, suggesting that emergent reasoning in neural networks depends on the emergence of symbolic mechanisms.
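Below is a minimal, purely illustrative sketch of the three-stage mechanism described in the abstract, written over explicit symbols rather than the distributed representations and attention heads the paper analyzes. All function names and the ABA-style toy task are assumptions for illustration, not code or task details from the paper.

```python
# Toy sketch (not the paper's code): the three computations described in the
# abstract, applied to an in-context identity rule such as "A B A".

def abstract_symbols(tokens):
    """Stage 1 ("symbol abstraction heads"): map tokens to abstract
    variables based on the relations (same/different) between them."""
    symbols, mapping = [], {}
    for tok in tokens:
        if tok not in mapping:
            mapping[tok] = chr(ord("A") + len(mapping))  # A, B, C, ...
        symbols.append(mapping[tok])
    return symbols, mapping

def induce_rule(example_symbol_seqs):
    """Stage 2 ("symbolic induction heads"): infer the abstract pattern
    shared by the in-context examples (e.g., A B A)."""
    pattern = example_symbol_seqs[0]
    assert all(seq == pattern for seq in example_symbol_seqs), "examples disagree"
    return pattern

def retrieve_next_token(pattern, partial_tokens):
    """Stage 3 ("retrieval heads"): predict the next token by retrieving
    the value bound to the predicted abstract variable."""
    symbols, mapping = abstract_symbols(partial_tokens)
    next_symbol = pattern[len(partial_tokens)]
    inverse = {v: k for k, v in mapping.items()}
    return inverse[next_symbol]

# In-context examples all instantiate the rule A B A with different tokens.
examples = [["cat", "dog", "cat"], ["red", "blue", "red"]]
pattern = induce_rule([abstract_symbols(seq)[0] for seq in examples])
print(retrieve_next_token(pattern, ["sun", "moon"]))  # -> "sun"
```

The point of the sketch is only to separate the three roles the paper attributes to different layers: abstraction away from token identity, induction over the resulting variables, and retrieval of the concrete token bound to the predicted variable.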

@article{yang2025_2502.20332,
  title={Emergent Symbolic Mechanisms Support Abstract Reasoning in Large Language Models},
  author={Yukang Yang and Declan Campbell and Kaixuan Huang and Mengdi Wang and Jonathan Cohen and Taylor Webb},
  journal={arXiv preprint arXiv:2502.20332},
  year={2025}
}