Can Tool-augmented Large Language Models be Aware of Incomplete Conditions?

Abstract

Recent advancements in integrating large language models (LLMs) with tools have allowed the models to interact with real-world environments. However, these tool-augmented LLMs often encounter incomplete scenarios, such as when users provide partial information or the necessary tools are unavailable. Recognizing and managing such scenarios is crucial for the reliability of LLMs, yet this problem remains understudied. This study examines whether LLMs can identify incomplete conditions and appropriately determine when to refrain from using tools. To this end, we construct a dataset by manipulating instances from two existing datasets, removing either the necessary tools or the essential information required for tool invocation. Our experiments show that LLMs often struggle to identify the absence of information required to utilize specific tools and to recognize the absence of appropriate tools. We further analyze model behaviors in different environments and compare their performance against humans. Our research can contribute to advancing reliable LLMs by addressing common scenarios that arise during interactions between humans and LLMs. Our code and dataset will be publicly available.
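To make the dataset manipulation concrete, the sketch below shows one plausible way to derive such incomplete instances. The abstract does not specify the instance schema or pipeline, so the `ToolInstance` fields and helper names here are hypothetical illustrations, not the authors' actual implementation.

```python
from dataclasses import dataclass, replace
import random


@dataclass(frozen=True)
class ToolInstance:
    """One benchmark instance: a user query plus the tools it needs.

    All field names are assumptions; the paper does not describe its schema.
    """
    query: str                     # user instruction
    available_tools: list[str]     # tools exposed to the model
    required_tool: str             # tool the gold answer invokes
    required_args: dict[str, str]  # argument name -> value stated in the query


def remove_tool(inst: ToolInstance) -> ToolInstance:
    """Make the instance unanswerable by hiding the necessary tool."""
    tools = [t for t in inst.available_tools if t != inst.required_tool]
    return replace(inst, available_tools=tools)


def remove_info(inst: ToolInstance, rng: random.Random) -> ToolInstance:
    """Make the instance unanswerable by deleting one essential argument
    value from the query, so the required tool can no longer be invoked."""
    arg, value = rng.choice(sorted(inst.required_args.items()))
    remaining = {k: v for k, v in inst.required_args.items() if k != arg}
    return replace(inst, query=inst.query.replace(value, ""), required_args=remaining)
```

Under this framing, a model "aware of incomplete conditions" should decline to call a tool on instances produced by either manipulation, rather than hallucinating a tool call with fabricated arguments.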

@article{yang2025_2406.12307,
  title={Can Tool-augmented Large Language Models be Aware of Incomplete Conditions?},
  author={Seungbin Yang and ChaeHun Park and Taehee Kim and Jaegul Choo},
  journal={arXiv preprint arXiv:2406.12307},
  year={2025}
}