ResearchTrend.AI

Generative Video Bi-flow

9 March 2025
Chen Liu
Tobias Ritschel
Abstract

We propose a novel generative video model that robustly learns temporal change as a neural Ordinary Differential Equation (ODE) flow, using a bilinear objective that combines two aspects. The first is to map directly from past video frames to future ones; previous work instead maps noise to new frames, which is computationally more expensive. Unfortunately, starting from the previous frame rather than from noise is more prone to drifting errors. Hence, second, we additionally learn to remove the accumulated errors as a joint objective, by adding noise during training. We demonstrate unconditional video generation in a streaming manner on various video datasets, at quality competitive with a conditional diffusion baseline but at higher speed, i.e., with fewer ODE solver steps.
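The abstract's core idea — learn an ODE flow from the previous frame to the next one, and add noise to the starting frame during training so the model also learns to correct accumulated drift — can be illustrated with a toy sketch. This is not the paper's architecture: the scalar "video", the linear regressor standing in for the network, the noise scale `sigma`, and the helper names `velocity` and `next_frame` are all illustrative assumptions, shown only to make the frame-to-frame flow objective concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": scalar frames that advance by exactly 1.0 per time step.
frames = np.arange(10.0)

# Build flow-matching-style training samples for the frame-to-frame flow.
# The start point x0 is the *previous frame plus noise* (not pure noise),
# so the learned velocity also pulls drifted inputs back toward the data.
sigma = 0.1
X, y = [], []
for x_prev, x_next in zip(frames[:-1], frames[1:]):
    for _ in range(200):
        x0 = x_prev + sigma * rng.standard_normal()  # noised previous frame
        x1 = x_next                                  # clean next frame
        s = rng.uniform()                            # flow time in [0, 1]
        xs = (1 - s) * x0 + s * x1                   # point on straight path
        X.append([xs, s, 1.0])
        y.append(x1 - x0)                            # target velocity of path

X, y = np.asarray(X), np.asarray(y)
w, *_ = np.linalg.lstsq(X, y, rcond=None)            # tiny linear "network"

def velocity(x, s):
    """Learned velocity field v(x, s)."""
    return float(np.array([x, s, 1.0]) @ w)

def next_frame(x, n_steps=4):
    """Euler-integrate the ODE from the previous frame (not from noise)."""
    ds = 1.0 / n_steps
    for k in range(n_steps):
        x = x + velocity(x, k * ds) * ds
    return x
```

Because generation starts at the previous frame instead of at noise, only a few solver steps are needed per new frame (here, `n_steps=4` already recovers the `+1.0` dynamics), which is the speed advantage the abstract claims over the noise-to-frame diffusion baseline.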

@article{liu2025_2503.06364,
  title={Generative Video Bi-flow},
  author={Chen Liu and Tobias Ritschel},
  journal={arXiv preprint arXiv:2503.06364},
  year={2025}
}