Rethinking Diverse Human Preference Learning through Principal Component Analysis

Understanding human preferences is crucial for improving foundation models and building personalized AI systems. However, preferences are inherently diverse and complex, making it difficult for traditional reward models to capture their full range. While fine-grained preference data can help, collecting it is expensive and hard to scale. In this paper, we introduce Decomposed Reward Models (DRMs), a novel approach that extracts diverse human preferences from binary comparisons without requiring fine-grained annotations. Our key insight is to represent human preferences as vectors and analyze them using Principal Component Analysis (PCA). By constructing a dataset of embedding differences between preferred and rejected responses, DRMs identify orthogonal basis vectors that capture distinct aspects of preference. These decomposed rewards can be flexibly combined to align with different user needs, offering an interpretable and scalable alternative to traditional reward models. We demonstrate that DRMs effectively extract meaningful preference dimensions (e.g., helpfulness, safety, humor) and adapt to new users without additional training. Our results highlight DRMs as a powerful framework for personalized and interpretable LLM alignment. Our code is available at this https URL.
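
The abstract's core recipe (form embedding differences between preferred and rejected responses, apply PCA, and treat the resulting orthogonal components as combinable reward directions) can be illustrated with a minimal sketch. This is not the authors' released code; the embeddings, component count, and user weights below are placeholder assumptions.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical embeddings: one row per binary preference pair.
# chosen_emb[i] / rejected_emb[i] stand in for embeddings of the preferred
# and rejected responses to prompt i (random placeholders here).
n_pairs, dim = 1000, 128
chosen_emb = rng.normal(size=(n_pairs, dim))
rejected_emb = rng.normal(size=(n_pairs, dim))

# Dataset of embedding differences: each vector points from the rejected
# response toward the preferred one.
diffs = chosen_emb - rejected_emb

# PCA yields orthogonal basis vectors; each component serves as one candidate
# "decomposed reward" direction (e.g., helpfulness, safety, humor).
pca = PCA(n_components=10)
pca.fit(diffs)
reward_directions = pca.components_  # shape: (10, dim)

def decomposed_rewards(response_emb: np.ndarray) -> np.ndarray:
    """Score response embeddings along every decomposed reward direction."""
    return response_emb @ reward_directions.T

# A user-specific reward is a weighted combination of the components; the
# weights could be fit from a small amount of that user's feedback, which is
# how adaptation without retraining the base model is possible in this sketch.
user_weights = np.zeros(10)
user_weights[0] = 1.0  # e.g., emphasize the first preference dimension
scores = decomposed_rewards(chosen_emb) @ user_weights
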
@article{luo2025_2502.13131,
  title={Rethinking Diverse Human Preference Learning through Principal Component Analysis},
  author={Feng Luo and Rui Yang and Hao Sun and Chunyuan Deng and Jiarui Yao and Jingyan Shen and Huan Zhang and Hanjie Chen},
  journal={arXiv preprint arXiv:2502.13131},
  year={2025}
}