Beyond Single-Point Judgment: Distribution Alignment for LLM-as-a-Judge

Abstract

LLMs have emerged as powerful evaluators in the LLM-as-a-Judge paradigm, offering significant efficiency and flexibility compared to human judgments. However, previous methods primarily rely on single-point evaluations, overlooking the inherent diversity and uncertainty in human evaluations. This approach leads to information loss and decreases the reliability of evaluations. To address this limitation, we propose a novel training framework that explicitly aligns the LLM-generated judgment distribution with empirical human distributions. Specifically, we introduce a distributional alignment objective based on KL divergence, combined with an auxiliary cross-entropy regularization to stabilize the training process. Furthermore, considering that empirical distributions may be derived from limited human annotations, we incorporate adversarial training to enhance model robustness against distribution perturbations. Extensive experiments across various LLM backbones and evaluation tasks demonstrate that our framework significantly outperforms existing closed-source LLMs and conventional single-point alignment methods, with improved alignment quality, evaluation accuracy, and robustness.
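
The abstract describes three components: a KL-divergence alignment term against the empirical human judgment distribution, an auxiliary cross-entropy regularizer, and adversarial training against perturbations of that distribution. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of one plausible way to combine these terms, assuming human ratings are summarized as a distribution `p_human` over K discrete score levels and the judge model outputs logits over the same levels. The names `distribution_alignment_loss`, `ce_weight`, and `eps` are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F

def distribution_alignment_loss(judge_logits, p_human, ce_weight=0.1, eps=0.05):
    """Hedged sketch of a KL alignment loss with cross-entropy regularization
    and an FGSM-style adversarial perturbation of the target distribution.

    judge_logits: (B, K) unnormalized scores from the judge model
    p_human:      (B, K) empirical human judgment distributions (rows sum to 1)
    ce_weight, eps: illustrative hyperparameters, not values from the paper
    """
    log_q = F.log_softmax(judge_logits, dim=-1)  # judge distribution, in log space

    # Light smoothing avoids log(0) when differentiating w.r.t. the target.
    p_s = p_human + 1e-6
    p_s = p_s / p_s.sum(dim=-1, keepdim=True)

    # Adversarial perturbation of the empirical target: take a signed gradient
    # step that increases the KL term, then project back onto the simplex.
    p_adv = p_s.clone().requires_grad_(True)
    kl_inner = F.kl_div(log_q, p_adv, reduction="batchmean")  # KL(p_adv || q)
    grad = torch.autograd.grad(kl_inner, p_adv, retain_graph=True)[0]
    with torch.no_grad():
        p_pert = (p_s + eps * grad.sign()).clamp(min=0)
        p_pert = p_pert / p_pert.sum(dim=-1, keepdim=True)

    # Distributional alignment against the perturbed target (gradient flows
    # only through the judge's logits).
    loss_kl = F.kl_div(log_q, p_pert, reduction="batchmean")

    # Auxiliary cross-entropy on the modal human score, for training stability.
    hard_labels = p_human.argmax(dim=-1)
    loss_ce = F.cross_entropy(judge_logits, hard_labels)

    return loss_kl + ce_weight * loss_ce
```

The choice of FGSM-style perturbation on the target, the KL direction, and the use of the modal score for the cross-entropy term are assumptions made for this sketch; the paper's actual formulation may differ.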

@article{chen2025_2505.12301,
  title={Beyond Single-Point Judgment: Distribution Alignment for LLM-as-a-Judge},
  author={Luyu Chen and Zeyu Zhang and Haoran Tan and Quanyu Dai and Hao Yang and Zhenhua Dong and Xu Chen},
  journal={arXiv preprint arXiv:2505.12301},
  year={2025}
}