StructFlowBench: A Structured Flow Benchmark for Multi-turn Instruction Following

Multi-turn instruction following is a core competency of large language models (LLMs) in real-world applications. Existing evaluation benchmarks predominantly focus on fine-grained constraint satisfaction and domain-specific capability assessment, yet overlook the crucial structural dependencies between dialogue turns that distinguish multi-turn from single-turn interactions. These structural dependencies not only reflect user intent but also establish an essential second dimension for instruction-following evaluation beyond constraint satisfaction. To address this gap, we propose StructFlowBench, a multi-turn instruction-following benchmark with structural flow modeling. The benchmark defines an innovative structural flow framework comprising six fundamental inter-turn relationships. These relationships introduce novel structural constraints for model evaluation and also serve as generation parameters for creating customized dialogue flows tailored to specific scenarios. Adopting established LLM-based automatic evaluation methodologies, we conduct systematic evaluations of 13 leading open-source and closed-source LLMs. Experimental results reveal significant deficiencies in current models' comprehension of multi-turn dialogue structures. The code is available at this https URL.
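The abstract describes inter-turn relationships doubling as generation parameters for building dialogue flows. As a minimal sketch of what such a structural flow representation might look like, the snippet below models a dialogue as a sequence of turns, each tagged with its relation to the previous turn. The relation labels and class names here are illustrative placeholders, not the paper's actual taxonomy or code.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Placeholder relation labels: the paper defines six inter-turn
# relationships, but this abstract does not name them.
RELATIONS = {"follow-up", "refinement", "recall", "expansion", "summary", "unrelated"}

@dataclass
class Turn:
    instruction: str
    relation_to_prev: Optional[str] = None  # structural link to the previous turn

@dataclass
class DialogueFlow:
    turns: List[Turn] = field(default_factory=list)

    def add_turn(self, instruction: str, relation: Optional[str] = None) -> None:
        if relation is not None and relation not in RELATIONS:
            raise ValueError(f"unknown relation: {relation}")
        if self.turns and relation is None:
            raise ValueError("non-initial turns must declare a relation")
        self.turns.append(Turn(instruction, relation))

    def structure(self) -> List[str]:
        # The sequence of inter-turn relations is the dialogue's "structural flow".
        return [t.relation_to_prev for t in self.turns if t.relation_to_prev]

flow = DialogueFlow()
flow.add_turn("Write a short story about a robot.")
flow.add_turn("Make the ending happier.", relation="refinement")
flow.add_turn("Summarize the story in one sentence.", relation="summary")
print(flow.structure())  # ['refinement', 'summary']
```

Under this framing, a benchmark generator would sample a target structure (e.g. `["refinement", "summary"]`) first and then instantiate turn content to match it, which is one plausible reading of "relationships serve as generation parameters."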
@article{li2025_2502.14494,
  title={StructFlowBench: A Structured Flow Benchmark for Multi-turn Instruction Following},
  author={Jinnan Li and Jinzhe Li and Yue Wang and Yi Chang and Yuan Wu},
  journal={arXiv preprint arXiv:2502.14494},
  year={2025}
}