MFR-Net: Multi-faceted Responsive Listening Head Generation via Denoising Diffusion Model
arXiv:2308.16635 · 31 August 2023
Jin Liu, Xi Wang, Xiaomeng Fu, Yesheng Chai, Cai Yu, Jiao Dai, Jizhong Han
DiffM
Papers citing "MFR-Net: Multi-faceted Responsive Listening Head Generation via Denoising Diffusion Model" (6 / 6 papers shown)
VividListener: Expressive and Controllable Listener Dynamics Modeling for Multi-Modal Responsive Interaction
Shiying Li, Xingqun Qi, Bingkun Yang, Chen Weile, Zezhao Tian, Muyi Sun, Qifeng Liu, Man Zhang, Zhenan Sun
30 Apr 2025
CustomListener: Text-guided Responsive Interaction for User-friendly Listening Head Generation
Xi Liu, Ying Guo, Cheng Zhen, Tong Li, Yingying Ao, Pengfei Yan
DiffM
01 Mar 2024
Talking Head from Speech Audio using a Pre-trained Image Generator
M. M. Alghamdi, He-Nan Wang, A. Bulpitt, David C. Hogg
09 Sep 2022
EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model
Xinya Ji, Hang Zhou, Kaisiyuan Wang, Qianyi Wu, Wayne Wu, Feng Xu, Xun Cao
CVBM
30 May 2022
One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning
Suzhe Wang, Lincheng Li, Yueqing Ding, Xin Yu
CVBM
06 Dec 2021
PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering
Yurui Ren, Gezhong Li, Yuanqi Chen, Thomas H. Li, Shan Liu
DiffM, VGen
17 Sep 2021