ToolSpectrum: Towards Personalized Tool Utilization for Large Language Models

19 May 2025
Zihao Cheng, Hongru Wang, Zeming Liu, Yuhang Guo, Yuanfang Guo, Yunhong Wang, Haifeng Wang
Abstract

While integrating external tools into large language models (LLMs) enhances their ability to access real-time information and domain-specific services, existing approaches focus narrowly on functional tool selection following user instructions, overlooking context-aware personalization in tool selection. This oversight leads to suboptimal user satisfaction and inefficient tool utilization, particularly when overlapping toolsets require nuanced selection based on contextual factors. To bridge this gap, we introduce ToolSpectrum, a benchmark designed to evaluate LLMs' capabilities in personalized tool utilization. Specifically, we formalize two key dimensions of personalization, user profile and environmental factors, and analyze their individual and synergistic impacts on tool utilization. Through extensive experiments on ToolSpectrum, we demonstrate that personalized tool utilization significantly improves user experience across diverse scenarios. However, even state-of-the-art LLMs exhibit only a limited ability to reason jointly about user profiles and environmental factors, often prioritizing one dimension at the expense of the other. Our findings underscore the necessity of context-aware personalization in tool-augmented LLMs and reveal critical limitations of current models. Our data and code are available at this https URL.
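As a rough illustration of the setting the abstract describes, the sketch below shows how a personalized tool-selection instance (instruction plus user profile and environmental factors, with overlapping candidate tools) and a simple selection-accuracy metric might be represented. All field names, tool names, and values here are hypothetical assumptions for illustration only, not ToolSpectrum's actual schema, data, or evaluation protocol.

```python
from dataclasses import dataclass


@dataclass
class ToolCase:
    """One hypothetical evaluation instance in the spirit of ToolSpectrum.

    Field names are illustrative assumptions, not the benchmark's real schema.
    """
    instruction: str            # the user's request
    user_profile: dict          # personalization dimension 1 (e.g. age, preferences)
    environment: dict           # personalization dimension 2 (e.g. location, connectivity)
    candidate_tools: list       # overlapping tools that could all satisfy the instruction
    gold_tool: str              # the tool a context-aware assistant should pick


def selection_accuracy(cases, select_tool):
    """Fraction of cases where the model's chosen tool matches the gold tool.

    `select_tool` is any callable mapping a ToolCase to one of its candidate tools,
    e.g. a wrapper around an LLM prompted with the instruction, profile, and environment.
    """
    if not cases:
        return 0.0
    hits = sum(1 for c in cases if select_tool(c) == c.gold_tool)
    return hits / len(cases)


# Toy usage: profile and environment jointly determine which of two
# functionally overlapping navigation tools is the better choice.
case = ToolCase(
    instruction="Plan a route to the airport",
    user_profile={"age": 70, "prefers": "simple, large-font interfaces"},
    environment={"country": "DE", "network": "offline"},
    candidate_tools=["OnlineMapsPro", "OfflineNavLite"],
    gold_tool="OfflineNavLite",
)
print(selection_accuracy([case], select_tool=lambda c: c.candidate_tools[-1]))  # 1.0
```

The point of the toy case is that both candidate tools satisfy the instruction functionally; only by weighing the profile and the environment together does the contextually better tool emerge, which is the joint-reasoning ability the abstract reports current LLMs handle poorly.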

@article{cheng2025_2505.13176,
  title={ToolSpectrum: Towards Personalized Tool Utilization for Large Language Models},
  author={Zihao Cheng and Hongru Wang and Zeming Liu and Yuhang Guo and Yuanfang Guo and Yunhong Wang and Haifeng Wang},
  journal={arXiv preprint arXiv:2505.13176},
  year={2025}
}