Step-level Reward for Free in RL-based T2I Diffusion Model Fine-tuning

Recent advances in text-to-image (T2I) diffusion model fine-tuning leverage reinforcement learning (RL) to align generated images with learnable reward functions. Existing approaches reformulate denoising as a Markov decision process for RL-driven optimization. However, they suffer from reward sparsity, receiving only a single delayed reward per generated trajectory. This flaw hinders precise step-level attribution of denoising actions and undermines training efficiency. To address this, we propose a simple yet effective credit assignment framework that dynamically distributes dense rewards across denoising steps. Specifically, we track changes in cosine similarity between intermediate and final images to quantify each step's contribution to progressively reducing the distance to the final image. Our approach avoids auxiliary neural networks for step-level preference modeling and instead uses reward shaping to highlight denoising phases that have a greater impact on image quality. Our method achieves 1.25 to 2 times higher sample efficiency and better generalization across four human preference reward functions, without compromising the original optimal policy.
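A minimal sketch of the idea described above, assuming a simple redistribution scheme: per-step credit is made proportional to the gain in cosine similarity between the intermediate image and the final image, and the terminal preference reward is then split according to those gains. The function name, tensor shapes, and normalization choice are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def step_level_rewards(intermediate_images, final_image, final_reward):
    """Hypothetical sketch: redistribute a single trajectory-level reward
    across denoising steps based on cosine-similarity progress.

    intermediate_images: list of tensors (C, H, W), one per denoising step
    final_image:         tensor (C, H, W), the fully denoised sample
    final_reward:        scalar reward from a preference model on the final image
    """
    final_flat = final_image.flatten()

    # Cosine similarity of each intermediate image to the final image.
    sims = torch.stack([
        F.cosine_similarity(img.flatten(), final_flat, dim=0)
        for img in intermediate_images
    ])

    # Per-step progress: how much each denoising action closed the gap
    # to the final image (difference of consecutive similarities).
    progress = sims[1:] - sims[:-1]

    # Normalize progress into weights and scale by the terminal reward,
    # so the total per-step credit matches the original reward signal.
    weights = progress / progress.sum().clamp(min=1e-8)
    return final_reward * weights
```

Under this sketch, denoising steps that move the sample most toward the final image receive larger rewards, while the per-step rewards still sum to the original terminal reward, which is one way to keep the shaped objective consistent with the original optimal policy.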
@article{liao2025_2505.19196,
  title   = {Step-level Reward for Free in RL-based T2I Diffusion Model Fine-tuning},
  author  = {Xinyao Liao and Wei Wei and Xiaoye Qu and Yu Cheng},
  journal = {arXiv preprint arXiv:2505.19196},
  year    = {2025}
}