VidStyleODE: Disentangled Video Editing via StyleGAN and NeuralODEs

12 April 2023
Moayed Haji-Ali
Andrew Bond
Tolga Birdal
Duygu Ceylan
Levent Karacan
Erkut Erdem
Aykut Erdem
Communities: VGen, DiffM
Abstract

We propose VidStyleODE, a spatiotemporally continuous disentangled video representation based upon StyleGAN and Neural-ODEs. Effective traversal of the latent space learned by Generative Adversarial Networks (GANs) has been the basis for recent breakthroughs in image editing. However, the applicability of such advancements to the video domain has been hindered by the difficulty of representing and controlling videos in the latent space of GANs. In particular, videos are composed of content (i.e., appearance) and complex motion components that require a special mechanism to disentangle and control. To achieve this, VidStyleODE encodes the video content in a pre-trained StyleGAN $\mathcal{W}_+$ space and benefits from a latent ODE component to summarize the spatiotemporal dynamics of the input video. Our novel continuous video generation process then combines the two to generate high-quality and temporally consistent videos with varying frame rates. We show that our proposed method enables a variety of applications on real videos: text-guided appearance manipulation, motion manipulation, image animation, and video interpolation and extrapolation. Project website: this https URL
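The abstract outlines the architecture at a high level: a static appearance code in StyleGAN's $\mathcal{W}_+$ space, a latent ODE that summarizes the video's dynamics, and a generation step that combines the two at arbitrary query times. Below is a minimal, hypothetical sketch of that idea, not the authors' implementation: the generator stylegan_g, the two encoders, the DynamicsODE vector field, and the additive way the dynamics state modulates the $\mathcal{W}_+$ code are all assumptions made for illustration; only torchdiffeq's odeint is a real library call.

import torch
import torch.nn as nn
from torchdiffeq import odeint  # ODE solver from https://github.com/rtqichen/torchdiffeq


class DynamicsODE(nn.Module):
    """Hypothetical learned vector field f(t, z) for the latent dynamics state."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.Tanh(), nn.Linear(256, dim))

    def forward(self, t, z):
        # torchdiffeq passes the current time t and state z; t is unused here.
        return self.net(z)


class VidStyleODESketch(nn.Module):
    """Illustrative pairing of a static W+ content code with ODE-driven dynamics."""

    def __init__(self, stylegan_g, content_encoder, dynamics_encoder, latent_dim=512):
        super().__init__()
        self.g = stylegan_g                       # assumed frozen, pre-trained StyleGAN generator
        self.content_encoder = content_encoder    # assumed: frames -> W+ content code (B, L, 512)
        self.dynamics_encoder = dynamics_encoder  # assumed: frames -> initial dynamics state (B, 512)
        self.ode_func = DynamicsODE(latent_dim)

    def forward(self, frames, query_times):
        w_plus = self.content_encoder(frames)         # appearance code, shared across time steps
        z0 = self.dynamics_encoder(frames)            # summary of the observed motion
        z_t = odeint(self.ode_func, z0, query_times)  # (T, B, latent_dim) at arbitrary times
        # Assumption: combine dynamics and content by simple additive modulation of W+,
        # then decode each queried time step with the frozen generator.
        frames_out = [self.g(w_plus + z.unsqueeze(1)) for z in z_t]
        return torch.stack(frames_out, dim=1)         # (B, T, C, H, W)

In this sketch, changing query_times at inference corresponds to sampling the same clip at different frame rates, which is what makes the representation temporally continuous in the sense described in the abstract.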

@article{ali2023_2304.06020,
  title={VidStyleODE: Disentangled Video Editing via StyleGAN and NeuralODEs},
  author={Moayed Haji Ali and Andrew Bond and Tolga Birdal and Duygu Ceylan and Levent Karacan and Erkut Erdem and Aykut Erdem},
  journal={arXiv preprint arXiv:2304.06020},
  year={2023}
}