Token Signature: Predicting Chain-of-Thought Gains with Token Decoding Feature in Large Language Models

6 June 2025
Peijie Liu, Fengli Xu, Yong Li
Main: 9 pages · 6 figures · 14 tables · Appendix: 9 pages · Bibliography: 3 pages
Abstract

The Chain-of-Thought (CoT) technique has proven effective at improving the performance of large language models (LLMs) on complex reasoning tasks. However, the performance gains are inconsistent across tasks, and the underlying mechanism remains a long-standing research question. In this work, we make a preliminary observation that the monotonicity of token probability distributions may be correlated with the gains achieved through CoT reasoning. Leveraging this insight, we propose two indicators based on the token probability distribution to assess CoT effectiveness across different tasks. By combining instance-level indicators with a logistic regression model, we introduce Dynamic CoT, a method that dynamically selects between CoT and direct answering. Furthermore, we extend Dynamic CoT to closed-source models by transferring decision strategies learned from open-source models. Our indicators for assessing CoT effectiveness achieve an accuracy of 89.2%, and Dynamic CoT reduces token consumption by more than 35% while maintaining high accuracy. Overall, our work offers a novel perspective on the underlying mechanisms of CoT reasoning and provides a framework for its more efficient deployment.
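
The abstract describes a two-stage recipe: derive instance-level indicators from the token probability distribution of a direct (non-CoT) decode, then fit a logistic regression that routes each instance to CoT or a direct answer. The sketch below is a hypothetical illustration of that pipeline, not the paper's implementation: the specific features, the helper names (monotonicity_indicator, instance_features, fit_router), and the 0.5 routing threshold are all assumptions, and it presumes access to the top-1 token probabilities recorded at each decoding step.

# Hypothetical sketch of the Dynamic CoT routing idea (not the authors' code).
import numpy as np
from sklearn.linear_model import LogisticRegression

def monotonicity_indicator(top1_probs: np.ndarray) -> float:
    """Fraction of decoding steps at which the top-1 token probability
    does not increase; a rough proxy for distribution monotonicity."""
    if len(top1_probs) < 2:
        return 0.0
    return float(np.mean(np.diff(top1_probs) <= 0))

def instance_features(top1_probs: np.ndarray) -> np.ndarray:
    """Instance-level indicators computed from one decode's token signature."""
    return np.array([
        monotonicity_indicator(top1_probs),
        top1_probs.mean(),  # average decoding confidence
        top1_probs.std(),   # variability of confidence
    ])

def fit_router(signatures, cot_helped):
    """Fit the router. `signatures` holds one top-1 probability array per
    instance; `cot_helped` is 1 where CoT beat the direct answer, else 0."""
    X = np.stack([instance_features(s) for s in signatures])
    return LogisticRegression().fit(X, cot_helped)

def choose_prompting(router, top1_probs, threshold=0.5) -> str:
    """Dynamic CoT: invoke CoT only when it is predicted to help."""
    p_help = router.predict_proba(instance_features(top1_probs)[None, :])[0, 1]
    return "cot" if p_help >= threshold else "direct"

Under this reading, extending Dynamic CoT to a closed-source model amounts to fitting the router on an open-source model's token signatures and transferring its decisions, as the abstract suggests.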

BibTeX
@article{liu2025_2506.06008,
  title={Token Signature: Predicting Chain-of-Thought Gains with Token Decoding Feature in Large Language Models},
  author={Peijie Liu and Fengli Xu and Yong Li},
  journal={arXiv preprint arXiv:2506.06008},
  year={2025}
}