
Automated Creativity Evaluation for Large Language Models: A Reference-Based Approach

Abstract

Creative writing is a key capability of Large Language Models (LLMs), with potential applications in literature, storytelling, and various creative domains. However, evaluating the creativity of machine-generated texts remains a significant challenge: existing methods either rely on costly manual annotations or fail to align closely with human assessments. In this paper, we propose an effective automated evaluation method based on the Torrance Tests of Creative Writing (TTCW), which evaluate creativity as a product. Our method employs a reference-based Likert-style approach, scoring generated creative texts relative to high-quality reference texts across the various tests. Experimental results demonstrate that our method significantly improves the alignment between LLM evaluations and human assessments, achieving a pairwise accuracy of 0.75 (+15%).
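The pairwise accuracy reported above can be illustrated with a minimal sketch: the fraction of item pairs for which the automated evaluator's relative ordering agrees with the human ordering. This is an assumed, simplified formulation for illustration only; the paper's exact pairing protocol and tie handling may differ.

```python
from itertools import combinations

def pairwise_accuracy(model_scores, human_scores):
    """Fraction of item pairs whose relative ordering under the model
    matches the human ordering. Pairs tied under either rater are skipped.
    (Illustrative only; not the paper's exact protocol.)"""
    agree = total = 0
    for i, j in combinations(range(len(model_scores)), 2):
        human_diff = human_scores[i] - human_scores[j]
        model_diff = model_scores[i] - model_scores[j]
        if human_diff == 0 or model_diff == 0:
            continue  # skip ties
        total += 1
        if (human_diff > 0) == (model_diff > 0):
            agree += 1
    return agree / total if total else 0.0

# Example: the model ranks three texts in the same order as humans.
print(pairwise_accuracy([2, 4, 5], [1, 3, 4]))  # → 1.0
```

A score of 0.5 corresponds to chance-level agreement under this formulation, so the reported 0.75 indicates a substantial improvement over random ordering.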

@article{li2025_2504.15784,
  title={Automated Creativity Evaluation for Large Language Models: A Reference-Based Approach},
  author={Ruizhe Li and Chiwei Zhu and Benfeng Xu and Xiaorui Wang and Zhendong Mao},
  journal={arXiv preprint arXiv:2504.15784},
  year={2025}
}
