Speech Driven Video Editing via an Audio-Conditioned Diffusion Model
arXiv 2301.04474 (v3, latest) · 10 January 2023
Dan Bigioi, Shubhajit Basak, Michał Stypułkowski, Maciej Zięba, H. Jordan, R. McDonnell, Peter Corcoran
Tags: DiffM, VGen
Papers citing "Speech Driven Video Editing via an Audio-Conditioned Diffusion Model" (7 of 57 papers shown):
Talking Face Generation by Conditional Recurrent Adversarial Network
Yang Song, Jingwen Zhu, Dawei Li, Xiaolong Wang, Hairong Qi
CVBM · 13 Apr 2018
MoCoGAN: Decomposing Motion and Content for Video Generation
Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang, Jan Kautz
GAN · 17 Jul 2017
VoxCeleb: a large-scale speaker identification dataset
Arsha Nagrani, Joon Son Chung, Andrew Zisserman
26 Jun 2017
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros
GAN · 30 Mar 2017
Lip Reading Sentences in the Wild
Joon Son Chung, A. Senior, Oriol Vinyals, Andrew Zisserman
16 Nov 2016
LipNet: End-to-End Sentence-level Lipreading
Yannis Assael, Brendan Shillingford, Shimon Whiteson, Nando de Freitas
05 Nov 2016
Deep Unsupervised Learning using Nonequilibrium Thermodynamics
Jascha Narain Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya Ganguli
SyDa, DiffM · 12 Mar 2015