Understanding Object Dynamics for Interactive Image-to-Video Synthesis
21 June 2021
A. Blattmann, Timo Milbich, Michael Dorkenwald, Bjorn Ommer
Communities: DiffM, VGen

Papers citing "Understanding Object Dynamics for Interactive Image-to-Video Synthesis"

29 / 29 papers shown

Free4D: Tuning-free 4D Scene Generation with Spatial-Temporal Consistency
T. Liu, Z. Huang, Zhaoxi Chen, Guangcong Wang, Shoukang Hu, Liao Shen, Huiqiang Sun, Z. Cao, Wei Li, Ziwei Liu
Communities: VGen, 3DGS
26 Mar 2025

StoryAgent: Customized Storytelling Video Generation via Multi-Agent Collaboration
Panwen Hu, Jin Jiang, Jianqi Chen, Mingfei Han, Shengcai Liao, Xiaojun Chang, Xiaodan Liang
Communities: VGen, DiffM
07 Nov 2024

PhysGen: Rigid-Body Physics-Grounded Image-to-Video Generation
Shaowei Liu, Zhongzheng Ren, Saurabh Gupta, Shenlong Wang
Communities: VGen, DiffM, PINN
27 Sep 2024

Rethinking Human Evaluation Protocol for Text-to-Video Models: Enhancing Reliability, Reproducibility, and Practicality
Tianle Zhang, Langtian Ma, Yuchen Yan, Yuchen Zhang, Kai Wang, ..., Wenqi Shao, Yang You, Yu Qiao, Ping Luo, Kaipeng Zhang
Communities: VGen
13 Jun 2024

MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model
Muyao Niu, Xiaodong Cun, Xintao Wang, Yong Zhang, Ying Shan, Yinqiang Zheng
Communities: DiffM, VGen
30 May 2024

Dance Any Beat: Blending Beats with Visuals in Dance Video Generation
Xuanchen Wang, Heng Wang, Dongnan Liu, Weidong Cai
15 May 2024

TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models
Haomiao Ni, Bernhard Egger, Suhas Lohit, A. Cherian, Ye Wang, T. Koike-Akino, S. X. Huang, Tim K. Marks
Communities: DiffM
25 Apr 2024

Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts
Yue Ma, Yin-Yin He, Hongfa Wang, Andong Wang, Chenyang Qi, ..., Xiu Li, Zhifeng Li, H. Shum, Wei Liu, Qifeng Chen
Communities: VGen, DiffM
13 Mar 2024

DragAnything: Motion Control for Anything using Entity Representation
Weijia Wu, Zhuang Li, Yuchao Gu, Rui Zhao, Yefei He, David Junhao Zhang, Mike Zheng Shou, Yan Li, Tingting Gao, Di Zhang
Communities: VGen
12 Mar 2024

Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling
Xiaoyu Shi, Zhaoyang Huang, Fu-Yun Wang, Weikang Bian, Dasong Li, ..., Ka Chun Cheung, Simon See, Hongwei Qin, Jifeng Dai, Hongsheng Li
Communities: VGen, DiffM
29 Jan 2024

GenDeF: Learning Generative Deformation Field for Video Generation
Wen Wang, Kecheng Zheng, Qiuyu Wang, Hao Chen, Zifan Shi, Ceyuan Yang, Yujun Shen, Chunhua Shen
Communities: VGen, DiffM
07 Dec 2023

AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance
Zuozhuo Dai, Zhenghao Zhang, Yao Yao, Bingxue Qiu, Siyu Zhu, Long Qin, Weizhi Wang
Communities: VGen
21 Nov 2023

DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors
Jinbo Xing, Menghan Xia, Yong Zhang, Haoxin Chen, Wangbo Yu, Hanyuan Liu, Xintao Wang, Tien-Tsin Wong, Ying Shan
Communities: VGen
18 Oct 2023

FashionFlow: Leveraging Diffusion Models for Dynamic Fashion Video Synthesis from Static Imagery
Tasin Islam, A. Miron, Xiaohui Liu, Yongmin Li
Communities: DiffM
29 Sep 2023

DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory
Sheng-Siang Yin, Chenfei Wu, Jian Liang, Jie Shi, Houqiang Li, Gong Ming, Nan Duan
Communities: VGen
16 Aug 2023

Learn the Force We Can: Enabling Sparse Motion Control in Multi-Object Video Generation
A. Davtyan, Paolo Favaro
Communities: VGen
06 Jun 2023

Motion-Conditioned Diffusion Model for Controllable Video Synthesis
Tsai-Shien Chen, C. Lin, Hung-Yu Tseng, Nayeon Lee, Ming Yang
Communities: DiffM, VGen
27 Apr 2023

Conditional Image-to-Video Generation with Latent Flow Diffusion Models
Haomiao Ni, Changhao Shi, Kaican Li, Sharon X. Huang, Martin Renqiang Min
Communities: VGen, DiffM
24 Mar 2023

TKN: Transformer-based Keypoint Prediction Network For Real-time Video Prediction
Haoran Li, Pengyuan Zhou, Yi-Wen Lin, Y. Hao, Haiyong Xie, Yong Liao
Communities: ViT, AI4TS
17 Mar 2023

Blowing in the Wind: CycleNet for Human Cinemagraphs from Still Images
Hugo Bertiche Argila, Niloy J. Mitra, K. Kulkarni, C. Huang, Tuanfeng Y. Wang, Meysam Madadi, Sergio Escalera, Duygu Ceylan
15 Mar 2023

Controllable Video Generation by Learning the Underlying Dynamical System with Neural ODE
Yucheng Xu, Nanbo Li, A. Goel, Zijian Guo, Zonghai Yao, Hamidreza Kasaei, Mohammad-Sajad Kasaei, Zhibin Li
09 Mar 2023

Text-driven Video Prediction
Xue Song, Jingjing Chen, B. Zhu, Yu-Gang Jiang
Communities: VGen
06 Oct 2022

Exploring Optical-Flow-Guided Motion and Detection-Based Appearance for Temporal Sentence Grounding
Daizong Liu, Xiang Fang, Wei Hu, Pan Zhou
06 Mar 2022

Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning
Ligong Han, Jian Ren, Hsin-Ying Lee, Francesco Barbieri, Kyle Olszewski, Shervin Minaee, Dimitris N. Metaxas, Sergey Tulyakov
Communities: DiffM, VGen
04 Mar 2022

Controllable Animation of Fluid Elements in Still Images
Aniruddha Mahapatra, K. Kulkarni
Communities: VGen
06 Dec 2021

Make It Move: Controllable Image-to-Video Generation with Text Descriptions
Yaosi Hu, Chong Luo, Zhenzhong Chen
Communities: VGen
06 Dec 2021

iPOKE: Poking a Still Image for Controlled Stochastic Video Synthesis
A. Blattmann, Timo Milbich, Michael Dorkenwald, Bjorn Ommer
Communities: DiffM, VGen
06 Jul 2021

Stochastic Image-to-Video Synthesis using cINNs
Michael Dorkenwald, Timo Milbich, A. Blattmann, Robin Rombach, Konstantinos G. Derpanis, Bjorn Ommer
Communities: DiffM, VGen
10 May 2021

A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras, S. Laine, Timo Aila
12 Dec 2018