
FastInit: Fast Noise Initialization for Temporally Consistent Video Generation

Main: 9 pages, 3 figures, 6 tables; Bibliography: 3 pages; Appendix: 3 pages
Abstract

Video generation has made significant strides with the development of diffusion models; however, achieving high temporal consistency remains challenging. Recently, FreeInit identified a training-inference gap and introduced a method to iteratively refine the initial noise during inference. However, this iterative refinement substantially increases the computational cost of video generation. In this paper, we introduce FastInit, a fast noise initialization method that eliminates the need for iterative refinement. FastInit learns a Video Noise Prediction Network (VNPNet) that takes random noise and a text prompt as input and produces refined noise in a single forward pass. As a result, FastInit greatly improves the efficiency of video generation while achieving high temporal consistency across frames. To train the VNPNet, we create a large-scale dataset consisting of pairs of text prompts, random noise, and refined noise. Extensive experiments with various text-to-video models show that our method consistently improves the quality and temporal consistency of the generated videos. FastInit not only provides a substantial improvement in video generation but also offers a practical solution that can be applied directly at inference time. The code and dataset will be released.
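The abstract does not specify the VNPNet architecture. Below is a minimal, hypothetical PyTorch sketch of the single-pass idea it describes: a network that maps random initial noise and a text embedding to refined noise in one forward call. The class name aside, every detail here (latent shape (B, C, T, H, W), a pooled 768-dim prompt embedding, the residual 3D-conv design, layer sizes) is an illustrative assumption, not the authors' implementation.

import torch
import torch.nn as nn

class VNPNet(nn.Module):
    # Hypothetical sketch: refine random video noise in a single forward
    # pass, conditioned on a pooled text embedding. Layer choices are
    # illustrative assumptions, not the paper's architecture.
    def __init__(self, channels=4, text_dim=768, hidden=128):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.encoder = nn.Conv3d(channels, hidden, kernel_size=3, padding=1)
        self.decoder = nn.Conv3d(hidden, channels, kernel_size=3, padding=1)

    def forward(self, noise, text_emb):
        # noise: (B, C, T, H, W); text_emb: (B, text_dim)
        h = self.encoder(noise)
        # Broadcast the text conditioning over all frames and pixels.
        cond = self.text_proj(text_emb)[:, :, None, None, None]
        # Residual refinement: output has the same shape as the input noise.
        return noise + self.decoder(torch.relu(h + cond))

# Usage: one forward pass replaces FreeInit's iterative refinement loop.
noise = torch.randn(1, 4, 16, 64, 64)   # random initial latent noise
text_emb = torch.randn(1, 768)          # pooled prompt embedding (assumed)
refined = VNPNet()(noise, text_emb)     # refined noise, fed to the T2V model

The refined noise would then be used as the starting latent for an off-the-shelf text-to-video diffusion sampler, which is what makes the method applicable directly at inference time.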

@article{bai2025_2506.16119,
  title={FastInit: Fast Noise Initialization for Temporally Consistent Video Generation},
  author={Chengyu Bai and Yuming Li and Zhongyu Zhao and Jintao Chen and Peidong Jia and Qi She and Ming Lu and Shanghang Zhang},
  journal={arXiv preprint arXiv:2506.16119},
  year={2025}
}