
Diffusion Blend: Inference-Time Multi-Preference Alignment for Diffusion Models

Abstract

Reinforcement learning (RL) algorithms have recently been used to align diffusion models with downstream objectives such as aesthetic quality and text-image consistency by fine-tuning them to maximize a single reward function under a fixed KL regularization. However, this approach is inherently restrictive in practice, where alignment must balance multiple, often conflicting objectives. Moreover, user preferences vary across prompts, individuals, and deployment contexts, with varying tolerances for deviation from a pre-trained base model. We address the problem of inference-time multi-preference alignment: given a set of basis reward functions and a reference KL regularization strength, can we design a fine-tuning procedure so that, at inference time, it can generate images aligned with any user-specified linear combination of rewards and regularization, without requiring additional fine-tuning? We propose Diffusion Blend, a novel approach to solve inference-time multi-preference alignment by blending backward diffusion processes associated with fine-tuned models, and we instantiate this approach with two algorithms: DB-MPA for multi-reward alignment and DB-KLA for KL regularization control. Extensive experiments show that Diffusion Blend algorithms consistently outperform relevant baselines and closely match or exceed the performance of individually fine-tuned models, enabling efficient, user-driven alignment at inference time. The code is available at this https URL.
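To make the core idea concrete, below is a minimal sketch of what "blending backward diffusion processes" could look like at sampling time: a DDPM-style reverse process whose noise prediction at each step is a user-weighted combination of several fine-tuned models' predictions. This is an illustration under assumed conventions, not the paper's exact DB-MPA or DB-KLA rule; the names `eps_models`, `blend_weights`, and `alphas_cumprod` are hypothetical, and the fine-tuned networks are assumed to share an architecture and noise schedule.

```python
import torch

@torch.no_grad()
def blended_reverse_diffusion(eps_models, blend_weights, x_T, alphas_cumprod):
    """Illustrative sketch (not the paper's algorithm): run a DDPM-style reverse
    process in which the noise estimate at every step is a user-weighted
    combination of several fine-tuned models' noise predictions.

    eps_models:     list of networks, each mapping (x_t, t) -> predicted noise
    blend_weights:  user preference weights over the fine-tuned models
    x_T:            initial Gaussian noise, shape (batch, ...)
    alphas_cumprod: 1-D tensor of cumulative alpha-bar values, length T
    """
    x_t = x_T
    T = alphas_cumprod.shape[0]
    for t in reversed(range(T)):
        t_batch = torch.full((x_t.shape[0],), t, device=x_t.device, dtype=torch.long)

        # Blend the backward processes: weighted sum of the models' noise predictions.
        eps = sum(w * m(x_t, t_batch) for m, w in zip(eps_models, blend_weights))

        # Standard DDPM posterior mean computed from the blended noise estimate.
        a_bar_t = alphas_cumprod[t]
        a_bar_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0, device=x_t.device)
        alpha_t = a_bar_t / a_bar_prev
        mean = (x_t - (1 - alpha_t) / torch.sqrt(1 - a_bar_t) * eps) / torch.sqrt(alpha_t)

        if t > 0:
            sigma_t = torch.sqrt((1 - a_bar_prev) / (1 - a_bar_t) * (1 - alpha_t))
            x_t = mean + sigma_t * torch.randn_like(x_t)
        else:
            x_t = mean
    return x_t
```

In this sketch, changing `blend_weights` at inference time changes which mixture of preferences the samples are aligned with, without any further fine-tuning; the actual form of the blend and the handling of the KL strength are specified in the paper.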

@article{cheng2025_2505.18547,
  title={Diffusion Blend: Inference-Time Multi-Preference Alignment for Diffusion Models},
  author={Min Cheng and Fatemeh Doudi and Dileep Kalathil and Mohammad Ghavamzadeh and Panganamala R. Kumar},
  journal={arXiv preprint arXiv:2505.18547},
  year={2025}
}