Inference-Time Text-to-Video Alignment with Diffusion Latent Beam Search

The remarkable progress in text-to-video diffusion models enables photorealistic generation, although the generated videos often contain unnatural movement or deformation, reverse playback, and motionless scenes. Recently, the alignment problem has attracted significant attention, where the output of diffusion models is steered by some quantity measuring the goodness of the content. Because there is large room for improving perceptual quality along the frame direction, we should address which metrics to optimize and how to optimize them in video generation. In this paper, we propose diffusion latent beam search with a lookahead estimator, which can select a better diffusion latent to maximize a given alignment reward at inference time. We then point out that improving perceptual video quality while accounting for prompt alignment requires reward calibration by weighting existing metrics, because many previous metrics for quantifying the naturalness of video do not always correlate with evaluations by humans or vision-language models (VLMs). We demonstrate that our method improves the perceptual quality evaluated on the calibrated reward, by VLMs, and by human assessment, without updating model parameters, and yields the best generations compared to greedy search and best-of-N sampling at a much lower computational cost. The experiments highlight that our method benefits many capable generative models, and provide a practical guideline: inference-time compute should be allocated to lookahead steps for reward estimation rather than to the search budget or the number of denoising steps.
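To make the core idea concrete, below is a minimal, illustrative sketch of diffusion latent beam search with a lookahead estimator, assuming a generic stochastic reverse-diffusion sampler. It is not the authors' reference implementation; the callables `sample_step`, `lookahead_denoise`, and `reward_fn` are hypothetical placeholders for a stochastic denoising step, a cheap roll-out to an estimated clean video, and an alignment reward, respectively.

```python
import torch

def diffusion_latent_beam_search(
    init_latents,        # (beam, C, T, H, W) initial noise latents
    sample_step,         # fn(latent, t) -> one stochastic next latent x_{t-1}
    lookahead_denoise,   # fn(latent, t, num_lookahead) -> estimated clean video
    reward_fn,           # fn(video) -> scalar alignment reward
    timesteps,           # iterable of denoising timesteps, high to low
    beam_width=4,
    num_expand=2,        # stochastic samples drawn per beam candidate
    num_lookahead=5,     # extra steps used only for reward estimation
):
    beam = list(init_latents[:beam_width])
    for t in timesteps:
        # Expand: draw several stochastic next latents per beam candidate.
        candidates = [sample_step(z, t) for z in beam for _ in range(num_expand)]
        # Lookahead: roll each candidate forward a few cheap steps to an
        # estimated clean video, then score it with the alignment reward.
        scores = torch.tensor(
            [reward_fn(lookahead_denoise(z, t, num_lookahead)) for z in candidates]
        )
        # Select: keep the top-`beam_width` latents by estimated reward.
        top = torch.topk(scores, k=min(beam_width, len(candidates))).indices
        beam = [candidates[i] for i in top]
    # The beam is ordered by descending estimated reward after the last step.
    return beam[0]
```

In this sketch, the paper's guideline corresponds to spending extra compute inside `lookahead_denoise` (larger `num_lookahead`) rather than increasing `beam_width`, `num_expand`, or the length of `timesteps`.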
View on arXiv

@article{oshima2025_2501.19252,
  title={Inference-Time Text-to-Video Alignment with Diffusion Latent Beam Search},
  author={Yuta Oshima and Masahiro Suzuki and Yutaka Matsuo and Hiroki Furuta},
  journal={arXiv preprint arXiv:2501.19252},
  year={2025}
}