
Adaptive Tool Use in Large Language Models with Meta-Cognition Trigger

18 February 2025
Wenjun Li
Dexun Li
Kuicai Dong
Cong Zhang
Hao Zhang
Weiwen Liu
Yasheng Wang
Ruiming Tang
Yong Liu
Main: 9 pages · Appendix: 13 pages · Bibliography: 3 pages · 10 figures · 10 tables
Abstract

Large language models (LLMs) have shown remarkable emergent capabilities, transforming the execution of functional tasks by leveraging external tools for complex problems that require specialized processing or up-to-date data. While existing research expands LLMs' access to diverse tools (e.g., program interpreters, search engines, calculators), the necessity of using these tools is often overlooked, leading to indiscriminate tool invocation. This naive approach raises two key issues: increased latency due to unnecessary tool calls, and potential errors resulting from faulty interactions with external tools. In this paper, we introduce meta-cognition as a proxy for LLMs' self-assessment of their capabilities, reflecting the model's awareness of its own limitations. Based on this, we propose MeCo, an adaptive decision-making strategy for external tool use. MeCo quantifies metacognitive scores by capturing high-level cognitive signals in the representation space, guiding when to invoke tools. Notably, MeCo is fine-tuning-free and incurs minimal cost. Experiments across multiple backbone models and benchmarks show that MeCo reliably detects LLMs' internal cognitive signals and significantly improves tool-use decision-making.

@article{li2025_2502.12961,
  title={Adaptive Tool Use in Large Language Models with Meta-Cognition Trigger},
  author={Wenjun Li and Dexun Li and Kuicai Dong and Cong Zhang and Hao Zhang and Weiwen Liu and Yasheng Wang and Ruiming Tang and Yong Liu},
  journal={arXiv preprint arXiv:2502.12961},
  year={2025}
}