DiffuseSlide: Training-Free High Frame Rate Video Generation Diffusion

2 June 2025
Geunmin Hwang
Hyun-kyu Ko
Younghyun Kim
Seungryong Lee
Eunbyung Park
Main: 8 pages · Appendix: 4 pages · Bibliography: 2 pages · 10 figures · 4 tables
Abstract

Recent advancements in diffusion models have revolutionized video generation, enabling the creation of high-quality, temporally consistent videos. However, generating high frame-rate (FPS) videos remains a significant challenge due to issues such as flickering and degradation in long sequences, particularly in fast-motion scenarios. Existing methods often suffer from computational inefficiencies and limitations in maintaining video quality over extended frames. In this paper, we present a novel, training-free approach for high FPS video generation using pre-trained diffusion models. Our method, DiffuseSlide, introduces a new pipeline that leverages key frames from low FPS videos and applies innovative techniques, including noise re-injection and sliding window latent denoising, to achieve smooth, consistent video outputs without the need for additional fine-tuning. Through extensive experiments, we demonstrate that our approach significantly improves video quality, offering enhanced temporal coherence and spatial fidelity. The proposed method is not only computationally efficient but also adaptable to various video generation tasks, making it ideal for applications such as virtual reality, video games, and high-quality content creation.

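The abstract outlines the pipeline only at a high level: take key frames from a low-FPS video, interpolate their latents to the target frame rate, re-inject noise to push the interpolated latents back to an intermediate diffusion step, and then denoise the long sequence with overlapping sliding windows. The snippet below is a minimal sketch of that idea, not the authors' implementation: the `denoise_step` callable, the `alpha_bar_t` constant, and the `window`/`stride` values are placeholder assumptions standing in for a pre-trained video diffusion model and its noise schedule.

```python
# Minimal sketch of the DiffuseSlide idea as described in the abstract:
# interpolate latents between key frames, re-inject noise, and denoise
# with overlapping sliding windows. `denoise_step` and the schedule
# constant `alpha_bar_t` are hypothetical placeholders, not the paper's code.
import torch


def interpolate_keyframe_latents(key_latents: torch.Tensor, factor: int) -> torch.Tensor:
    """Linearly interpolate key-frame latents (K, C, H, W) to (K-1)*factor + 1 frames."""
    frames = []
    for i in range(key_latents.shape[0] - 1):
        a, b = key_latents[i], key_latents[i + 1]
        for j in range(factor):
            t = j / factor
            frames.append((1 - t) * a + t * b)
    frames.append(key_latents[-1])
    return torch.stack(frames, dim=0)


def reinject_noise(latents: torch.Tensor, alpha_bar_t: float) -> torch.Tensor:
    """Push latents back to an intermediate noise level (DDPM-style forward step)."""
    noise = torch.randn_like(latents)
    return (alpha_bar_t ** 0.5) * latents + ((1.0 - alpha_bar_t) ** 0.5) * noise


def sliding_window_denoise(latents, denoise_step, timesteps, window=16, stride=8):
    """Denoise a long latent sequence window by window, averaging overlapping frames."""
    num_frames = latents.shape[0]
    for t in timesteps:
        out = torch.zeros_like(latents)
        count = torch.zeros(num_frames, device=latents.device)
        start = 0
        while start < num_frames:
            end = min(start + window, num_frames)
            out[start:end] += denoise_step(latents[start:end], t)
            count[start:end] += 1
            if end == num_frames:
                break
            start += stride
        latents = out / count.view(-1, 1, 1, 1)
    return latents
```

With a real model, `denoise_step` would be one reverse-diffusion step of the pre-trained video diffusion model applied to a window of latents; averaging the frames shared by overlapping windows is what keeps adjacent windows temporally consistent, and re-injecting noise before denoising is what lets the interpolated frames be refined rather than merely blended.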
@article{hwang2025_2506.01454,
  title={DiffuseSlide: Training-Free High Frame Rate Video Generation Diffusion},
  author={Geunmin Hwang and Hyun-kyu Ko and Younghyun Kim and Seungryong Lee and Eunbyung Park},
  journal={arXiv preprint arXiv:2506.01454},
  year={2025}
}