Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs

20 May 2025
Zhipeng Yang, Junzhuo Li, Siyu Xia, Xuming Hu
Communities: AIFin · LRM
Abstract

We show that large language models (LLMs) exhibit an internal chain-of-thought: they sequentially decompose and execute composite tasks layer by layer. Two claims ground our study: (i) distinct subtasks are learned at different network depths, and (ii) these subtasks are executed sequentially across layers. On a benchmark of 15 two-step composite tasks, we employ layer-from context-masking and propose a novel cross-task patching method, confirming (i). To examine claim (ii), we apply LogitLens to decode hidden states, revealing a consistent layer-wise execution pattern. We further replicate our analysis on the real-world TRACE benchmark, observing the same stepwise dynamics. Together, our results enhance LLM transparency by showing their capacity to internally plan and execute subtasks (or instructions), opening avenues for fine-grained, instruction-level activation steering.

View on arXiv: https://arxiv.org/abs/2505.14530
@article{yang2025_2505.14530,
  title={Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs},
  author={Zhipeng Yang and Junzhuo Li and Siyu Xia and Xuming Hu},
  journal={arXiv preprint arXiv:2505.14530},
  year={2025}
}