ToolHaystack: Stress-Testing Tool-Augmented Language Models in Realistic Long-Term Interactions

Large language models (LLMs) have demonstrated strong capabilities in using external tools to address user inquiries. However, most existing evaluations assume tool use in short contexts, offering limited insight into model behavior during realistic long-term interactions. To fill this gap, we introduce ToolHaystack, a benchmark for testing tool-use capabilities in long-term interactions. Each test instance in ToolHaystack includes multiple task execution contexts and realistic noise within a continuous conversation, enabling assessment of how well models maintain context and handle various disruptions. By applying this benchmark to 14 state-of-the-art LLMs, we find that while current models perform well in standard multi-turn settings, they often struggle significantly in ToolHaystack, highlighting critical gaps in their long-term robustness that previous tool benchmarks do not reveal.
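To make the setup concrete, here is a minimal sketch of what such a test instance might look like. All names and structures below (`Turn`, `HaystackInstance`, the probe mechanism) are illustrative assumptions, not the actual ToolHaystack format: several task execution contexts are interleaved with unrelated "noise" turns in one continuous conversation, and a later probe checks whether an earlier tool result is still recoverable.

```python
# Hypothetical sketch of a long-term tool-use test instance; the class
# and field names are assumptions, not the ToolHaystack data format.
from dataclasses import dataclass, field
from typing import Optional, List


@dataclass
class Turn:
    role: str                       # "user", "assistant", or "tool"
    content: str
    task_id: Optional[str] = None   # which task context this turn belongs to


@dataclass
class HaystackInstance:
    turns: List[Turn] = field(default_factory=list)

    def add_task(self, task_id: str, query: str, tool_result: str) -> None:
        # One task execution context: a user query and the tool's output.
        self.turns += [
            Turn("user", query, task_id),
            Turn("tool", tool_result, task_id),
        ]

    def add_noise(self, chitchat: str) -> None:
        # A realistic disruption unrelated to any task context.
        self.turns.append(Turn("user", chitchat))

    def gold_answer(self, task_id: str) -> str:
        # Gold answer for a recall probe: the tool result stored earlier
        # in the conversation for the given task context.
        return next(t.content for t in self.turns
                    if t.task_id == task_id and t.role == "tool")


inst = HaystackInstance()
inst.add_task("t1", "What's the weather in Paris?", "22C, sunny")
inst.add_noise("By the way, I watched a great movie yesterday.")
inst.add_task("t2", "Book a table for two tonight.", "Reservation #481 confirmed")
print(inst.gold_answer("t1"))  # -> 22C, sunny
```

An evaluated model would be given the full turn sequence and later asked about task `t1`; scoring compares its answer against `gold_answer("t1")` despite the intervening noise and the second task context.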
@article{kwak2025_2505.23662,
  title={ToolHaystack: Stress-Testing Tool-Augmented Language Models in Realistic Long-Term Interactions},
  author={Beong-woo Kwak and Minju Kim and Dongha Lim and Hyungjoo Chae and Dongjin Kang and Sunghwan Kim and Dongil Yang and Jinyoung Yeo},
  journal={arXiv preprint arXiv:2505.23662},
  year={2025}
}