
DOVE: Efficient One-Step Diffusion Model for Real-World Video Super-Resolution

Abstract

Diffusion models have demonstrated promising performance in real-world video super-resolution (VSR). However, the dozens of sampling steps they require make inference extremely slow. Sampling acceleration techniques, particularly single-step sampling, offer a potential solution. Nonetheless, achieving one-step sampling in VSR remains challenging due to the high training overhead on video data and stringent fidelity demands. To tackle these issues, we propose DOVE, an efficient one-step diffusion model for real-world VSR. DOVE is obtained by fine-tuning a pretrained video diffusion model (*i.e.*, CogVideoX). To train DOVE effectively, we introduce a latent-pixel training strategy: a two-stage scheme that gradually adapts the model to the video super-resolution task. Meanwhile, we design a video processing pipeline to construct a high-quality dataset tailored for VSR, termed HQ-VSR. Fine-tuning on this dataset further enhances the restoration capability of DOVE. Extensive experiments show that DOVE achieves performance comparable or superior to multi-step diffusion-based VSR methods. It also offers outstanding inference efficiency, achieving up to a 28× speed-up over existing methods such as MGLD-VSR. Code is available at: this https URL.
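To illustrate the key efficiency claim, below is a minimal, hypothetical sketch (not the authors' code) of one-step diffusion inference for VSR: a single forward pass maps a low-quality latent to a restored latent, instead of iterating over dozens of denoising steps. The model, timestep handling, and tensor shapes here are placeholder assumptions; DOVE itself fine-tunes a pretrained CogVideoX backbone.

```python
# Minimal sketch of one-step video restoration (hypothetical placeholder model).
import torch
import torch.nn as nn


class TinyVideoRestorer(nn.Module):
    """Stand-in for a diffusion backbone that predicts a clean latent in one pass."""

    def __init__(self, channels: int = 4):
        super().__init__()
        self.net = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, z_lq: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # A real backbone would condition on a timestep embedding; omitted for brevity.
        return self.net(z_lq)


@torch.no_grad()
def one_step_vsr(model: nn.Module, z_lq: torch.Tensor) -> torch.Tensor:
    """Single forward pass from low-quality latent to restored latent
    (contrast with multi-step samplers that loop over many timesteps)."""
    t = torch.zeros(z_lq.shape[0], dtype=torch.long)  # fixed timestep for the single step
    return model(z_lq, t)


if __name__ == "__main__":
    model = TinyVideoRestorer()
    z_lq = torch.randn(1, 4, 8, 32, 32)  # (batch, channels, frames, height, width) latent
    z_hq = one_step_vsr(model, z_lq)
    print(z_hq.shape)
```

The speed-up reported in the abstract comes precisely from replacing the multi-step sampling loop with this kind of single forward pass.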

@article{chen2025_2505.16239,
  title={DOVE: Efficient One-Step Diffusion Model for Real-World Video Super-Resolution},
  author={Zheng Chen and Zichen Zou and Kewei Zhang and Xiongfei Su and Xin Yuan and Yong Guo and Yulun Zhang},
  journal={arXiv preprint arXiv:2505.16239},
  year={2025}
}