CodeFlowBench: A Multi-turn, Iterative Benchmark for Complex Code Generation

Modern software development demands code that is maintainable, testable, and scalable, achieved by organizing implementations into modular components and iteratively reusing existing code. We formalize this iterative, multi-turn paradigm as codeflow and introduce CodeFlowBench, the first benchmark designed to comprehensively evaluate LLMs' ability to perform codeflow, i.e., to implement new functionality by reusing existing functions over multiple turns. CodeFlowBench comprises 5,258 problems drawn from Codeforces and is continuously updated via an automated pipeline that decomposes each problem into subproblems with unit tests based on dependency-tree and dataflow analysis. We further propose a novel evaluation framework featuring a dual assessment protocol and structural metrics derived from dependency trees. Extensive experiments on 16 popular LLMs reveal significant performance degradation in multi-turn scenarios; for instance, o1-mini retains only 20.8% Pass@1 in the multi-turn setting versus 37.8% in the single-turn setting. Finer-grained analysis shows that model performance correlates inversely with dependency complexity. These findings not only highlight critical challenges in supporting real-world development workflows, but also establish CodeFlowBench as an essential tool for advancing code generation research.
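To make the setup concrete, the sketch below illustrates the general idea under stated assumptions, not the authors' actual pipeline: a problem's dependency tree is ordered so that each turn can reuse functions implemented in earlier turns, and per-turn success is scored with the standard unbiased Pass@k estimator. The dependency dictionary and function names are hypothetical.

```python
# Illustrative sketch only (hypothetical names, not the CodeFlowBench pipeline):
# order subproblems from a dependency tree so helpers precede their callers,
# and estimate Pass@1 from sampled solutions checked against unit tests.
from math import comb


def turn_order(dependencies: dict[str, list[str]]) -> list[str]:
    """Topologically order subproblems so dependencies come before dependents."""
    order, seen = [], set()

    def visit(node: str) -> None:
        if node in seen:
            return
        seen.add(node)
        for dep in dependencies.get(node, []):
            visit(dep)
        order.append(node)

    for node in dependencies:
        visit(node)
    return order


def pass_at_k(n: int, c: int, k: int = 1) -> float:
    """Unbiased pass@k estimator: n samples drawn, c of them pass all tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# Hypothetical dependency tree for one Codeforces-style problem:
# solve() reuses two helpers, one of which reuses a third.
deps = {
    "solve": ["build_index", "query"],
    "query": ["binary_search"],
    "build_index": [],
    "binary_search": [],
}
print(turn_order(deps))           # ['build_index', 'binary_search', 'query', 'solve']
print(pass_at_k(n=10, c=3, k=1))  # 0.3 when 3 of 10 samples pass the unit tests
```

In a multi-turn evaluation of this kind, the functions produced in earlier turns would be placed in the model's context for later turns, so failures compound along the dependency chain; a single-turn evaluation would instead ask for the full solution at once.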
@article{wang2025_2504.21751,
  title   = {CodeFlowBench: A Multi-turn, Iterative Benchmark for Complex Code Generation},
  author  = {Sizhe Wang and Zhengren Wang and Dongsheng Ma and Yongan Yu and Rui Ling and Zhiyu Li and Feiyu Xiong and Wentao Zhang},
  journal = {arXiv preprint arXiv:2504.21751},
  year    = {2025}
}