ResearchTrend.AI

Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion
arXiv:2204.08451 · 18 April 2022
Evonne Ng, Hanbyul Joo, Liwen Hu, Hao Li, Trevor Darrell, Angjoo Kanazawa, Shiry Ginosar
VGen

Papers citing "Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion"

22 papers shown
Latent Behavior Diffusion for Sequential Reaction Generation in Dyadic Setting
Minh-Duc Nguyen, Hyung-Jeong Yang, Soo-Hyung Kim, Ji-Eun Shin, Seung-Won Kim
DiffM · 12 May 2025
VividListener: Expressive and Controllable Listener Dynamics Modeling for Multi-Modal Responsive Interaction
Shiying Li, Xingqun Qi, Bingkun Yang, Chen Weile, Zezhao Tian, Muyi Sun, Qifeng Liu, Man Zhang, Zhenan Sun
30 Apr 2025
3DFacePolicy: Speech-Driven 3D Facial Animation with Diffusion Policy
Xuanmeng Sha, Liyun Zhang, Tomohiro Mashita, Yuki Uranishi
VGen · 17 Sep 2024
ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE
Sichun Wu, Kazi Injamamul Haque, Zerrin Yumak
VGen · 12 Sep 2024
Synergy and Synchrony in Couple Dances
V. Maluleke, Lea Müller, Jathushan Rajasegaran, Georgios Pavlakos, Shiry Ginosar, Angjoo Kanazawa, Jitendra Malik
06 Sep 2024
DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation
Jisoo Kim, Jungbin Cho, Joonho Park, Soonmin Hwang, Da Eun Kim, Geon Kim, Youngjae Yu
12 Aug 2024
GLDiTalker: Speech-Driven 3D Facial Animation with Graph Latent Diffusion Transformer
Yihong Lin, Zhaoxin Fan, Lingyu Xiong, Liang Peng, Xiandong Li, Wenxiong Kang, Huang Xu
03 Aug 2024
Robust Facial Reactions Generation: An Emotion-Aware Framework with Modality Compensation
Guanyu Hu, Jie Wei, Siyang Song, Dimitrios Kollias, Xinyu Yang, Zhonglin Sun, Odysseus Kaloidas
22 Jul 2024
InterAct: Capture and Modelling of Realistic, Expressive and Interactive Activities between Two Persons in Daily Scenarios
Yinghao Huang, Leo Ho, Dafei Qin, Mingyi Shi, Taku Komura
VGen · 19 May 2024
CustomListener: Text-guided Responsive Interaction for User-friendly Listening Head Generation
Xi Liu, Ying Guo, Cheng Zhen, Tong Li, Yingying Ao, Pengfei Yan
DiffM · 01 Mar 2024
SpeechAct: Towards Generating Whole-body Motion from Speech
Jinsong Zhang, Minjie Zhu, Yuxiang Zhang, Yebin Liu, Kun Li
29 Nov 2023
AdaMesh: Personalized Facial Expressions and Head Poses for Adaptive Speech-Driven 3D Facial Animation
Liyang Chen, Weihong Bao, Shunwei Lei, Boshi Tang, Zhiyong Wu, Shiyin Kang, Haozhi Huang, Helen M. Meng
11 Oct 2023
Controlling Character Motions without Observable Driving Source
Weiyuan Li, Bin Dai, Ziyi Zhou, Qi Yao, Baoyuan Wang
VGen · 11 Aug 2023
Hierarchical Semantic Perceptual Listener Head Video Generation: A High-performance Pipeline
Zhigang Chang, Weitai Hu, Q. Yang, Shibao Zheng
VGen · 19 Jul 2023
Emotional Speech-Driven Animation with Content-Emotion Disentanglement
Radek Daněček, Kiran Chhatre, Shashank Tripathi, Yandong Wen, Michael J. Black, Timo Bolkart
15 Jun 2023
AMII: Adaptive Multimodal Inter-personal and Intra-personal Model for Adapted Behavior Synthesis
Jieyeon Woo, Mireille Fares, Catherine Pelachaud, Catherine Achard
LLMAG · 18 May 2023
Egocentric Auditory Attention Localization in Conversations
Fiona Ryan, Hao Jiang, Abhinav Shukla, James M. Rehg, V. Ithapu
EgoV · 28 Mar 2023
Affective Faces for Goal-Driven Dyadic Communication
Scott Geng, Revant Teotia, Purva Tendulkar, Sachit Menon, Carl Vondrick
VGen · 26 Jan 2023
Locomotion-Action-Manipulation: Synthesizing Human-Scene Interactions in Complex 3D Environments
Jiye Lee, Hanbyul Joo
09 Jan 2023
Generating Holistic 3D Human Motion from Speech
Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yandong Wen, Timo Bolkart, Dacheng Tao, Michael J. Black
SLR · 08 Dec 2022
Audio-Driven Co-Speech Gesture Video Generation
Xian Liu, Qianyi Wu, Hang Zhou, Yuanqi Du, Wayne Wu, Dahua Lin, Ziwei Liu
SLR, VGen · 05 Dec 2022
It Takes Two: Learning to Plan for Human-Robot Cooperative Carrying
Eley Ng, Ziang Liu, Monroe Kennedy
26 Sep 2022