Rethinking Direct Preference Optimization in Diffusion Models

24 May 2025
Junyong Kang
Seohyun Lim
Kyungjune Baek
Hyunjung Shim
Abstract

Aligning text-to-image (T2I) diffusion models with human preferences has emerged as a critical research challenge. While recent advances in this area have extended preference optimization techniques from large language models (LLMs) to the diffusion setting, they often struggle with limited exploration. In this work, we propose a novel and orthogonal approach to enhancing diffusion-based preference optimization. First, we introduce a stable reference model update strategy that relaxes the frozen reference model, encouraging exploration while maintaining a stable optimization anchor through reference model regularization. Second, we present a timestep-aware training strategy that mitigates the reward scale imbalance problem across timesteps. Our method can be integrated into various preference optimization algorithms. Experimental results show that our approach improves the performance of state-of-the-art methods on human preference evaluation benchmarks.
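The abstract describes two components: a relaxed, regularized reference model and a timestep-aware training strategy. Below is a minimal sketch of how these ideas could sit on top of a Diffusion-DPO-style objective. The function names (update_reference, timestep_weight, dpo_loss), the EMA-plus-anchor mixing, the linear timestep weighting, and the toy denoiser interface are illustrative assumptions for this sketch, not the authors' implementation.

# Hedged sketch of the two ideas from the abstract on a Diffusion-DPO-style
# objective. Names, schedules, and the toy denoiser are assumptions.
import copy
import torch
import torch.nn.functional as F


def update_reference(reference, policy, anchor, decay=0.999, anchor_mix=0.05):
    """Relaxed reference update: track the policy with an EMA, mixed with a
    small pull toward a fixed anchor (e.g. the pretrained weights) so the
    reference stays a stable optimization target instead of being frozen."""
    with torch.no_grad():
        for p_ref, p_pol, p_anc in zip(reference.parameters(),
                                       policy.parameters(),
                                       anchor.parameters()):
            target = (1.0 - anchor_mix) * p_pol + anchor_mix * p_anc
            p_ref.mul_(decay).add_(target, alpha=1.0 - decay)


def timestep_weight(t, num_timesteps):
    """Toy timestep-aware weight to counter reward-scale imbalance across
    timesteps; a real scheme would be calibrated, this ramp is a placeholder."""
    return 1.0 - t.float() / num_timesteps


def dpo_loss(policy, reference, batch, beta=0.1, num_timesteps=1000):
    """Diffusion-DPO-style loss with timestep-aware weighting. `batch` holds
    noisy latents for a preferred (w) and dispreferred (l) sample, the shared
    noise target `eps`, and the sampled timesteps `t`."""
    x_w, x_l, eps, t = batch["x_w"], batch["x_l"], batch["eps"], batch["t"]

    def err(model, x):
        # Per-sample denoising error; its negative acts as the implicit reward.
        return ((model(x, t) - eps) ** 2).flatten(1).mean(dim=1)

    with torch.no_grad():  # reference terms carry no gradient
        ref_w, ref_l = err(reference, x_w), err(reference, x_l)

    # Preference margin of the policy relative to the (relaxed) reference.
    margin = (ref_w - err(policy, x_w)) - (ref_l - err(policy, x_l))

    # Timestep-aware weighting mitigates reward-scale imbalance across t.
    return -(timestep_weight(t, num_timesteps) * F.logsigmoid(beta * margin)).mean()


# Toy usage with a stand-in denoiser (real use: a T2I diffusion UNet).
class ToyDenoiser(torch.nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.net = torch.nn.Linear(dim + 1, dim)

    def forward(self, x, t):
        t_feat = (t.float() / 1000.0).unsqueeze(-1)
        return self.net(torch.cat([x, t_feat], dim=-1))


policy = ToyDenoiser()
reference = copy.deepcopy(policy)
anchor = copy.deepcopy(policy)  # frozen pretrained weights as the stable anchor
opt = torch.optim.AdamW(policy.parameters(), lr=1e-4)

batch = {"x_w": torch.randn(8, 16), "x_l": torch.randn(8, 16),
         "eps": torch.randn(8, 16), "t": torch.randint(0, 1000, (8,))}
loss = dpo_loss(policy, reference, batch)
loss.backward()
opt.step()
opt.zero_grad()
update_reference(reference, policy, anchor)  # relaxed reference after each step

In this sketch the reference is refreshed after every optimizer step, so it tracks the policy (encouraging exploration) while the anchor term keeps it from drifting far from the pretrained weights; the timestep weight is only a stand-in for whatever balancing scheme the paper actually uses.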

@article{kang2025_2505.18736,
  title={Rethinking Direct Preference Optimization in Diffusion Models},
  author={Junyong Kang and Seohyun Lim and Kyungjune Baek and Hyunjung Shim},
  journal={arXiv preprint arXiv:2505.18736},
  year={2025}
}