Diffusion models are state-of-the-art generative models across many domains, yet their samples often fail to satisfy downstream objectives such as safety constraints or domain-specific validity. Existing alignment techniques require gradients, internal model access, or large computational budgets. We introduce an inference-time alignment framework based on evolutionary algorithms: we treat diffusion models as black boxes and search their latent space to maximize alignment objectives. Our method enables efficient inference-time alignment for both differentiable and non-differentiable alignment objectives across a range of diffusion models. On the DrawBench and Open Image Preferences benchmarks, our evolutionary methods outperform state-of-the-art gradient-based and gradient-free inference-time methods. They require 55% to 76% less GPU memory and run 72% to 80% faster than gradient-based methods, and over 50 optimization steps on Open Image Preferences they achieve higher alignment scores than both gradient-based and gradient-free methods.
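The black-box latent-space search described above can be sketched with a simple (1+λ) evolution strategy. This is an illustrative reconstruction, not the paper's actual algorithm: `score_fn`, the latent dimension, and all hyperparameters below are hypothetical, and in practice `score_fn` would decode the latent with a frozen diffusion model and evaluate the (possibly non-differentiable) alignment objective on the result.

```python
import numpy as np

def evolve_latent(score_fn, dim=16, pop_size=8, sigma=0.1, steps=50, seed=0):
    """Black-box (1+lambda) evolutionary search over a diffusion latent.

    score_fn is a stand-in for: decode the latent with the (frozen)
    diffusion model, then score the sample with the alignment
    objective. No gradients or model internals are needed.
    """
    rng = np.random.default_rng(seed)
    best = rng.standard_normal(dim)      # initial latent z ~ N(0, I)
    best_score = score_fn(best)
    for _ in range(steps):
        # Mutate the current best latent with Gaussian noise.
        children = best + sigma * rng.standard_normal((pop_size, dim))
        scores = np.array([score_fn(z) for z in children])
        i = int(scores.argmax())
        if scores[i] > best_score:       # elitist selection: keep the best
            best, best_score = children[i], scores[i]
    return best, best_score

# Toy objective standing in for an alignment score:
# prefer latents close to a fixed target direction.
target = np.ones(16) / 4.0
z, s = evolve_latent(lambda z: -float(np.linalg.norm(z - target)))
```

Because the objective is only ever queried, not differentiated, the same loop applies unchanged to non-differentiable rewards such as safety classifiers or validity checks.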
@article{jajal2025_2506.00299,
  title={Inference-Time Alignment of Diffusion Models with Evolutionary Algorithms},
  author={Purvish Jajal and Nick John Eliopoulos and Benjamin Shiue-Hal Chou and George K. Thiruvathukal and James C. Davis and Yung-Hsiang Lu},
  journal={arXiv preprint arXiv:2506.00299},
  year={2025}
}