Simple and Effective Baselines for Code Summarisation Evaluation

Abstract

Code documentation is useful, but writing it is time-consuming. Many techniques for generating code summaries have emerged, but comparing them is difficult because human evaluation is expensive and automatic metrics are unreliable. In this paper, we introduce a simple new baseline in which we ask an LLM to give an overall score to a summary. Unlike n-gram and embedding-based baselines, our approach can consider the code when assigning a score. This also lets us build a variant that does not consider the reference summary at all, which could be used for other tasks, e.g., evaluating the quality of documentation in existing code bases. We find that our method matches or outperforms prior metrics, though we recommend using it in conjunction with embedding-based methods to avoid the risk of LLM-specific bias.
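
To make the setup concrete, below is a minimal sketch of this kind of LLM-based scorer, assuming an OpenAI-style chat API. The prompt wording, model choice, and 1-5 scale are illustrative assumptions, not the paper's exact configuration; passing reference=None corresponds to the reference-free variant described in the abstract.

# Minimal sketch of the direct-assessment idea: ask an LLM to score a
# summary given the code (and, optionally, a reference summary).
# Prompt wording, model name, and the 1-5 scale are assumptions for
# illustration, not the authors' implementation.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def score_summary(code: str, summary: str, reference: str | None = None) -> int:
    """Return an overall 1-5 quality score for a code summary.

    With reference=None this is the reference-free variant: the summary
    is judged against the code alone.
    """
    prompt = (
        "Rate how well the summary describes the code on a scale of 1-5.\n\n"
        f"Code:\n{code}\n\nSummary:\n{summary}\n"
    )
    if reference is not None:
        prompt += f"\nReference summary:\n{reference}\n"
    prompt += "\nReply with only the integer score."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    content = response.choices[0].message.content or ""
    match = re.search(r"[1-5]", content)
    return int(match.group()) if match else 3  # fall back to the midpoint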

@article{robinson2025_2505.19392,
  title={Simple and Effective Baselines for Code Summarisation Evaluation},
  author={Jade Robinson and Jonathan K. Kummerfeld},
  journal={arXiv preprint arXiv:2505.19392},
  year={2025}
}