Think-J: Learning to Think for Generative LLM-as-a-Judge

LLM-as-a-Judge refers to the automatic modeling of preferences over responses generated by Large Language Models (LLMs), which is of significant importance for both LLM evaluation and reward modeling. Although generative LLMs have made substantial progress on a wide range of tasks, their performance as LLM-Judges still falls short of expectations. In this work, we propose Think-J, which improves generative LLM-as-a-Judge by learning how to think. We first use a small amount of curated data to equip the model with initial judgment-thinking capabilities. We then optimize the judgment-thinking traces with reinforcement learning (RL), proposing two optimization methods based on offline and online RL, respectively. The offline method trains a critic model to construct positive and negative examples for learning, while the online method defines a rule-based reward as feedback for optimization. Experimental results show that our approach significantly enhances the evaluation capability of generative LLM-Judges, surpassing both generative and classifier-based LLM-Judges without requiring extra human annotations.
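The abstract does not spell out the exact form of the rule-based reward used in the online method; the following is a minimal sketch of what such a reward could look like, assuming (hypothetically) that the judge emits a thinking trace followed by a final line such as "Verdict: A" or "Verdict: B", and that the reward is based on whether the predicted winner matches the preference label.

```python
# Minimal sketch of a rule-based reward for judgment-thinking RL.
# Assumptions (not from the paper): the judge output ends with a line
# "Verdict: A" or "Verdict: B"; reward is +1 for a correct verdict,
# 0 for an incorrect one, and a small penalty if no verdict is found.
import re
from typing import Optional

VERDICT_RE = re.compile(r"Verdict:\s*([AB])\b", re.IGNORECASE)


def extract_verdict(completion: str) -> Optional[str]:
    """Return 'A' or 'B' parsed from the judge's output, or None if absent."""
    match = VERDICT_RE.search(completion)
    return match.group(1).upper() if match else None


def judgment_reward(completion: str, preferred: str) -> float:
    """Rule-based reward: compare the judge's verdict to the preference label."""
    verdict = extract_verdict(completion)
    if verdict is None:
        return -0.5  # hypothetical format penalty for a missing/malformed verdict
    return 1.0 if verdict == preferred else 0.0


if __name__ == "__main__":
    sample = "Response A is more factual and better organized.\nVerdict: A"
    print(judgment_reward(sample, preferred="A"))  # -> 1.0
```

A reward of this shape can be plugged into standard policy-optimization loops; the actual verdict format, penalty values, and RL algorithm used in Think-J may differ.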
@article{huang2025_2505.14268,
  title   = {Think-J: Learning to Think for Generative LLM-as-a-Judge},
  author  = {Hui Huang and Yancheng He and Hongli Zhou and Rui Zhang and Wei Liu and Weixun Wang and Wenbo Su and Bo Zheng and Jiaheng Liu},
  journal = {arXiv preprint arXiv:2505.14268},
  year    = {2025}
}