CRITICTOOL: Evaluating Self-Critique Capabilities of Large Language Models in Tool-Calling Error Scenarios

11 June 2025
Shiting Huang
Zhen Fang
Zehui Chen
Siyu Yuan
Junjie Ye
Yu Zeng
Lin Chen
Qi Mao
Feng Zhao
LLMAG, KELM
Main: 8 pages · Appendix: 21 pages · Bibliography: 3 pages · 28 figures · 8 tables
Abstract

The ability of large language models (LLMs) to utilize external tools has enabled them to tackle an increasingly diverse range of tasks. However, as tasks become more complex and long-horizon, the intricate tool utilization process may trigger various unexpected errors. How to effectively handle such errors, including identifying, diagnosing, and recovering from them, has therefore emerged as a key research direction for advancing tool learning. In this work, we first extensively analyze the types of errors encountered during the function-calling process on several competitive tool evaluation benchmarks. Based on this analysis, we introduce CRITICTOOL, a comprehensive critique evaluation benchmark specialized for tool learning. Building upon a novel evolutionary strategy for dataset construction, CRITICTOOL covers diverse tool-use errors of varying complexity, which better reflects real-world scenarios. We conduct extensive experiments on CRITICTOOL and validate the generalization and effectiveness of our benchmark construction strategy. We also provide an in-depth analysis of the tool reflection abilities of various LLMs, offering a new perspective on the field of tool learning in LLMs. The code is available at this https URL.
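
The error-handling cycle the abstract refers to, identifying a failed tool call, diagnosing its cause, and recovering from it, can be pictured as a small self-critique loop. The sketch below is illustrative only and is not the CRITICTOOL benchmark code; call_llm, TOOLS, and run_with_self_critique are hypothetical names standing in for whatever model API and tool registry an implementation would use.

import json

# Hypothetical tool registry; real agents would register richer tool schemas.
TOOLS = {
    "get_weather": lambda city: {"city": city, "temp_c": 21},  # stub tool
}

def call_llm(messages):
    """Placeholder for any chat-completion client that returns a JSON tool call string."""
    raise NotImplementedError

def execute_tool_call(raw_call):
    """Run one tool call; return a result or an error record for the model to critique."""
    try:
        call = json.loads(raw_call)                        # identify: malformed JSON?
        fn = TOOLS[call["name"]]                           # identify: unknown tool?
        return {"ok": True, "result": fn(**call["args"])}  # identify: bad arguments?
    except Exception as exc:
        return {"ok": False, "error": f"{type(exc).__name__}: {exc}"}

def run_with_self_critique(task, max_retries=2):
    """Let the model reflect on its own failed tool calls and retry (recover)."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_retries + 1):
        raw_call = call_llm(messages)
        outcome = execute_tool_call(raw_call)
        if outcome["ok"]:
            return outcome["result"]                       # recovery succeeded
        # diagnose: feed the error back and ask the model to revise its call
        messages.append({"role": "assistant", "content": raw_call})
        messages.append({
            "role": "user",
            "content": f"The tool call failed with: {outcome['error']}. "
                       "Reflect on the mistake and emit a corrected call.",
        })
    return None                                            # recovery failed

A benchmark such as CRITICTOOL evaluates how well different LLMs perform the reflection step in a loop of this kind when the injected errors vary in type and complexity.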

@article{huang2025_2506.13977,
  title={CRITICTOOL: Evaluating Self-Critique Capabilities of Large Language Models in Tool-Calling Error Scenarios},
  author={Shiting Huang and Zhen Fang and Zehui Chen and Siyu Yuan and Junjie Ye and Yu Zeng and Lin Chen and Qi Mao and Feng Zhao},
  journal={arXiv preprint arXiv:2506.13977},
  year={2025}
}