CoCo-Bench: A Comprehensive Code Benchmark For Multi-task Large Language Model Evaluation

Abstract

Large language models (LLMs) play a crucial role in software engineering, excelling in tasks like code generation and maintenance. However, existing benchmarks are often narrow in scope, focusing on a single task and lacking a comprehensive evaluation framework that reflects real-world applications. To address these gaps, we introduce CoCo-Bench (Comprehensive Code Benchmark), designed to evaluate LLMs across four critical dimensions: code understanding, code generation, code modification, and code review. These dimensions capture essential developer needs, ensuring a more systematic and representative evaluation. CoCo-Bench covers multiple programming languages and varying task difficulties, with rigorous manual review to ensure data quality and accuracy. Empirical results show that CoCo-Bench aligns with existing benchmarks while uncovering significant variations in model performance, effectively highlighting model strengths and weaknesses. By offering a holistic and objective evaluation, CoCo-Bench provides valuable insights to guide future research and technological advancements in code-oriented LLMs, establishing a reliable benchmark for the field.
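
To make the four-dimension evaluation concrete, the sketch below shows one way such a multi-task harness could be organized. It is a minimal, hypothetical illustration: the `Task` fields, dimension names, and the `model`/`score` callables are assumptions for exposition, not the benchmark's actual data schema or API.

```python
# Hypothetical sketch of a multi-dimension evaluation loop in the spirit of
# CoCo-Bench. Field names and helpers are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Task:
    dimension: str   # one of: "understanding", "generation", "modification", "review"
    language: str    # e.g. "python", "java"
    prompt: str      # task input shown to the model
    reference: str   # reference answer, canonical patch, or review comment


def evaluate(model: Callable[[str], str],
             tasks: List[Task],
             score: Callable[[str, str], float]) -> Dict[str, float]:
    """Average a per-sample score within each evaluation dimension."""
    per_dim: Dict[str, List[float]] = {}
    for task in tasks:
        prediction = model(task.prompt)
        per_dim.setdefault(task.dimension, []).append(score(prediction, task.reference))
    return {dim: sum(vals) / len(vals) for dim, vals in per_dim.items()}
```

A harness like this yields one aggregate score per dimension, which is the kind of breakdown that lets a benchmark surface dimension-specific strengths and weaknesses rather than a single overall number.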

@article{yin2025_2504.20673,
  title={CoCo-Bench: A Comprehensive Code Benchmark For Multi-task Large Language Model Evaluation},
  author={Wenjing Yin and Tianze Sun and Yijiong Yu and Jiawei Fang and Guangyao Su and Jiancheng Wang and Zekun Wang and Wei Wang and Ran Chen and Ziyun Dai and Shuai Yuan and Menghang Dong and Peng Luo and Dong Cao and Da Lei and Yajun Zhang and Hao Chen and Xiang Ma and Yong Liu and Weifeng Liu and Yuanjian Xu and Ji Pei},
  journal={arXiv preprint arXiv:2504.20673},
  year={2025}
}