PerPO: Perceptual Preference Optimization via Discriminative Rewarding

This paper presents Perceptual Preference Optimization (PerPO), a perception alignment method aimed at addressing the visual discrimination challenges in generative pre-trained multimodal large language models (MLLMs). To align MLLMs with the human visual perception process, PerPO employs discriminative rewarding to gather diverse negative samples, followed by listwise preference optimization to rank them. Utilizing the reward as a quantitative margin for ranking, our method effectively bridges generative preference optimization and discriminative empirical risk minimization. PerPO significantly enhances MLLMs' visual discrimination capabilities while maintaining their generative strengths, mitigates image-unconditional reward hacking, and ensures consistent performance across visual tasks. This work marks a crucial step towards more perceptually aligned and versatile MLLMs. We also hope that PerPO will encourage the community to rethink MLLM alignment strategies.
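The abstract does not spell out the training objective, but a minimal sketch of how a discriminative reward could act as a quantitative margin in a listwise preference loss (pairwise log-sigmoid terms over reward-ranked candidates, loosely in the spirit of DPO-style objectives) might look like the following. The function name, the margin_scale parameter, and the use of raw sequence log-probabilities instead of policy-vs-reference log-ratios are illustrative assumptions, not the paper's actual formulation:

```python
import torch
import torch.nn.functional as F

def listwise_margin_preference_loss(logprobs, rewards, margin_scale=1.0):
    """Sketch of a listwise preference loss with a reward-derived margin.

    logprobs: (N,) sequence log-probabilities of N candidate responses
              under the policy model (higher = more preferred by the model).
    rewards:  (N,) discriminative rewards for the same candidates
              (e.g. a perception score measured against the image).
    """
    order = torch.argsort(rewards, descending=True)  # rank candidates by reward
    lp, rw = logprobs[order], rewards[order]

    loss = logprobs.new_zeros(())
    pairs = 0
    for i in range(len(lp)):
        for j in range(i + 1, len(lp)):
            # reward gap serves as the quantitative margin between the pair
            margin = margin_scale * (rw[i] - rw[j])
            # push the higher-reward candidate's log-prob above the lower one's
            # by at least the reward-derived margin
            loss = loss - F.logsigmoid(lp[i] - lp[j] - margin)
            pairs += 1
    return loss / max(pairs, 1)

# toy usage with made-up numbers
logprobs = torch.tensor([-12.3, -10.1, -15.7], requires_grad=True)
rewards = torch.tensor([0.20, 0.90, 0.05])
loss = listwise_margin_preference_loss(logprobs, rewards)
loss.backward()
print(float(loss))
```

Under this reading, candidates with larger reward gaps are pushed further apart in model likelihood, which is one way to connect the listwise ranking view with a margin-based empirical risk.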
@article{zhu2025_2502.04371,
  title={PerPO: Perceptual Preference Optimization via Discriminative Rewarding},
  author={Zining Zhu and Liang Zhao and Kangheng Lin and Jinze Yang and En Yu and Chenglong Liu and Haoran Wei and Jianjian Sun and Zheng Ge and Xiangyu Zhang},
  journal={arXiv preprint arXiv:2502.04371},
  year={2025}
}