SOP-Bench: Complex Industrial SOPs for Evaluating LLM Agents
Large Language Models (LLMs) demonstrate impressive general-purpose reasoning and problem-solving abilities. However, they struggle to execute complex, long-horizon workflows that demand strict adherence to Standard Operating Procedures (SOPs), a critical requirement for real-world industrial automation. Despite this need, public benchmarks that reflect the complexity, structure, and domain-specific nuances of SOPs are lacking. To address this, we present three main contributions. First, we introduce a synthetic data generation framework for creating realistic, industry-grade SOPs that rigorously test the planning, reasoning, and tool-use capabilities of LLM-based agents. Second, using this framework, we develop SOP-Bench, a benchmark of over 1,800 tasks across 10 industrial domains, each with APIs, tool interfaces, and human-validated test cases. Third, we evaluate two prominent agent architectures, Function-Calling and ReAct agents, on SOP-Bench, observing average success rates of only 27% and 48%, respectively. Remarkably, when the tool registry is much larger than necessary, agents invoke incorrect tools nearly 100% of the time. These findings underscore a substantial gap between the current agentic capabilities of LLMs and the demands of automating real-world SOPs. Performance varies significantly by task and domain, highlighting the need for domain-specific benchmarking and architectural choices before deployment. SOP-Bench is publicly available at this http URL. We also release the prompts underpinning the data generation framework to support the creation of new domain-specific SOP benchmarks. We invite the community to extend SOP-Bench with SOPs from their own industrial domains.
@article{nandi2025_2506.08119,
  title={SOP-Bench: Complex Industrial SOPs for Evaluating LLM Agents},
  author={Subhrangshu Nandi and Arghya Datta and Nikhil Vichare and Indranil Bhattacharya and Huzefa Raja and Jing Xu and Shayan Ray and Giuseppe Carenini and Abhi Srivastava and Aaron Chan and Man Ho Woo and Amar Kandola and Brandon Theresa and Francesco Carbone},
  journal={arXiv preprint arXiv:2506.08119},
  year={2025}
}