
Flow2Code: Evaluating Large Language Models for Flowchart-based Code Generation Capability

Abstract

While large language models (LLMs) show promise in code generation, existing benchmarks neglect flowchart-based code generation. To promote further research in this direction, this work presents Flow2Code, a novel benchmark for evaluating flowchart-based code generation. The evaluation dataset spans 15 programming languages and includes 5,622 code segments paired with 16,866 flowcharts of three types: code, UML, and pseudocode. Extensive experiments with 13 multimodal LLMs reveal that current LLMs cannot yet generate code from flowcharts reliably. Moreover, the results show that supervised fine-tuning substantially improves model performance. We publicly release our code and datasets at this https URL.

@article{he2025_2506.02073,
  title={Flow2Code: Evaluating Large Language Models for Flowchart-based Code Generation Capability},
  author={Mengliang He and Jiayi Zeng and Yankai Jiang and Wei Zhang and Zeming Liu and Xiaoming Shi and Aimin Zhou},
  journal={arXiv preprint arXiv:2506.02073},
  year={2025}
}