Gesticulator: A framework for semantically-aware speech-driven gesture generation
Taras Kucherenko, Patrik Jonell, S. V. Waveren, G. Henter, Simon Alexanderson, Iolanda Leite, Hedvig Kjellström. 25 January 2020. [SLR]

Papers citing "Gesticulator: A framework for semantically-aware speech-driven gesture generation"

47 papers shown

AsynFusion: Towards Asynchronous Latent Consistency Models for Decoupled Whole-Body Audio-Driven Avatars
T. Zhang, Jian Zhao, Yuer Li, Zheng Zhu, Ping Hu, Zhaoxin Fan, Wenjun Wu, Xuelong Li. 21 May 2025.

Synchronize Dual Hands for Physics-Based Dexterous Guitar Playing
Pei Xu, Ruocheng Wang. 20 Feb 2025.

Incorporating Spatial Awareness in Data-Driven Gesture Generation for Virtual Agents
Anna Deichler, Simon Alexanderson, Jonas Beskow. 07 Aug 2024.

Investigating the impact of 2D gesture representation on co-speech gesture generation
Teo Guichoux, Laure Soulier, Nicolas Obin, Catherine Pelachaud. 21 Jun 2024. [SLR]

Fake it to make it: Using synthetic data to remedy the data shortage in joint multimodal speech-and-gesture synthesis
Shivam Mehta, Anna Deichler, Jim O'Regan, Birger Moëll, Jonas Beskow, G. Henter, Simon Alexanderson. 30 Apr 2024.

SpeechAct: Towards Generating Whole-body Motion from Speech
Jinsong Zhang, Minjie Zhu, Yuxiang Zhang, Yebin Liu, Kun Li. 29 Nov 2023.

Large language models in textual analysis for gesture selection
Laura Birka Hensel, Nutchanon Yongsatianchot, P. Torshizi, E. Minucci, Stacy Marsella. 04 Oct 2023. [SLR]

UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons
Sicheng Yang, Zehao Wang, Zhiyong Wu, Minglei Li, Zhensong Zhang, ..., Lei Hao, Songcen Xu, Xiaofei Wu, Changpeng Yang, Zonghong Dai. 13 Sep 2023. [DiffM]

Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation
Anna Deichler, Shivam Mehta, Simon Alexanderson, Jonas Beskow. 11 Sep 2023. [DiffM]

BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer
Kunkun Pang, Dafei Qin, Yingruo Fan, Julian Habekost, Takaaki Shiratori, Junichi Yamagishi, Taku Komura. 07 Sep 2023. [SLR, ViT]

Audio is all in one: speech-driven gesture synthetics using WavLM pre-trained model
Fan Zhang, Naye Ji, Fuxing Gao, Siyuan Zhao, Zhaohan Wang, Shunman Li. 11 Aug 2023.

Human Motion Generation: A Survey
Wentao Zhu, Xiaoxuan Ma, Dongwoo Ro, Hai Ci, Jinlu Zhang, Jiaxin Shi, Feng Gao, Qi Tian, Yizhou Wang. 20 Jul 2023. [VGen]

EMoG: Synthesizing Emotive Co-speech 3D Gesture with Diffusion Model
Li-Ping Yin, Yijun Wang, Tianyu He, Jinming Liu, Wei Zhao, Bohan Li, Xin Jin, Jianxin Lin. 20 Jun 2023. [DiffM]

QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation
Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Hao-Wen Zhuang. 18 May 2023. [SLR]

Evaluating gesture generation in a large-scale open challenge: The GENEA Challenge 2022
Taras Kucherenko, Pieter Wolfert, Youngwoo Yoon, Carla Viegas, Teodor Nikolov, Mihail Tsakov, G. Henter. 15 Mar 2023.

A Comprehensive Review of Data-Driven Co-Speech Gesture Generation
Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, G. Henter, Michael Neff. 13 Jan 2023. [SLR]

Generating Holistic 3D Human Motion from Speech
Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yandong Wen, Timo Bolkart, Dacheng Tao, Michael J. Black. 08 Dec 2022. [SLR]

Audio-Driven Co-Speech Gesture Video Generation
Xian Liu, Qianyi Wu, Hang Zhou, Yuanqi Du, Wayne Wu, Dahua Lin, Ziwei Liu. 05 Dec 2022. [SLR, VGen]

Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models
Simon Alexanderson, Rajmund Nagy, Jonas Beskow, G. Henter. 17 Nov 2022. [DiffM, VGen]

Evaluating Data-Driven Co-Speech Gestures of Embodied Conversational Agents through Real-Time Interaction
Yuan He, André Pereira, Taras Kucherenko. 13 Oct 2022.

Deep Gesture Generation for Social Robots Using Type-Specific Libraries
Hitoshi Teshima, Naoki Wake, Diego Thomas, Yuta Nakashima, Hiroshi Kawasaki, Katsushi Ikeuchi. 13 Oct 2022. [SLR]

Rhythmic Gesticulator: Rhythm-Aware Co-Speech Gesture Synthesis with Hierarchical Neural Embeddings
Tenglong Ao, Qingzhe Gao, Yuke Lou, Baoquan Chen, Libin Liu. 04 Oct 2022. [SLR]

ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech
Saeed Ghorbani, Ylva Ferstl, Daniel Holden, N. Troje, M. Carbonneau. 15 Sep 2022.

The ReprGesture entry to the GENEA Challenge 2022
Sicheng Yang, Zhiyong Wu, Minglei Li, Mengchen Zhao, Jiuxin Lin, Liyang Chen, Weihong Bao. 25 Aug 2022.

The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation
Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov, Mihail Tsakov, G. Henter. 22 Aug 2022. [VGen]

Learning in Audio-visual Context: A Review, Analysis, and New Perspective
Yake Wei, Di Hu, Yapeng Tian, Xuelong Li. 20 Aug 2022.

Zero-Shot Style Transfer for Gesture Animation driven by Text and Speech using Adversarial Disentanglement of Multimodal Style Encoding
Mireille Fares, Michele Grimaldi, Catherine Pelachaud, Nicolas Obin. 03 Aug 2022.

Audio-driven Neural Gesture Reenactment with Video Motion Graphs
Yang Zhou, Jimei Yang, Dingzeyu Li, Jun Saito, Deepali Aneja, E. Kalogerakis. 23 Jul 2022. [DiffM, SLR]

Representation Learning of Image Schema
Fajrian Yunus, Chloé Clavel, Catherine Pelachaud. 17 Jul 2022. [OCL]

Analysis of Co-Laughter Gesture Relationship on RGB videos in Dyadic Conversation Contex
Hugo Bohy, Ahmad Hammoudeh, Antoine Maiorca, Stéphane Dupont, Thierry Dutoit. 20 May 2022.

Joint Audio-Text Model for Expressive Speech-Driven 3D Facial Animation
Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, Taku Komura. 04 Dec 2021.

Integrated Speech and Gesture Synthesis
Siyang Wang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, G. Henter, Éva Székely. 25 Aug 2021.

Multimodal analysis of the predictability of hand-gesture properties
Taras Kucherenko, Rajmund Nagy, Michael Neff, Hedvig Kjellström, G. Henter. 12 Aug 2021.

To Rate or Not To Rate: Investigating Evaluation Methods for Generated Co-Speech Gestures
Pieter Wolfert, J. Girard, Taras Kucherenko, Tony Belpaeme. 12 Aug 2021.

SGToolkit: An Interactive Gesture Authoring Toolkit for Embodied Conversational Agents
Youngwoo Yoon, Keunwoo Park, Minsu Jang, Jaehong Kim, Geehyuk Lee. 10 Aug 2021. [VGen, SLR]

Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning
Uttaran Bhattacharya, Elizabeth Childs, Nicholas Rewkowski, Tianyi Zhou. 31 Jul 2021. [SLR, GAN]

Speech2Properties2Gestures: Gesture-Property Prediction as a Tool for Generating Representational Gestures from Speech
Taras Kucherenko, Rajmund Nagy, Patrik Jonell, Michael Neff, Hedvig Kjellström, G. Henter. 28 Jun 2021.

Graph-based Normalizing Flow for Human Motion Generation and Reconstruction
Wenjie Yin, Hang Yin, Danica Kragic, Mårten Björkman. 07 Apr 2021. [3DH]

Toward Automated Generation of Affective Gestures from Text: A Theory-Driven Approach
Micol Spitale, Maja J. Matarić. 04 Mar 2021.

A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents
Rajmund Nagy, Taras Kucherenko, Birger Moell, André Pereira, Hedvig Kjellström, Ulysses Bernardet. 24 Feb 2021.

A large, crowdsourced evaluation of gesture generation systems on common data: The GENEA Challenge 2020
Taras Kucherenko, Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, G. Henter. 23 Feb 2021.

Learning Speech-driven 3D Conversational Gestures from Video
I. Habibie, Weipeng Xu, Dushyant Mehta, Lingjie Liu, Hans-Peter Seidel, Gerard Pons-Moll, Mohamed A. Elgharib, Christian Theobalt. 13 Feb 2021. [SLR, CVBM, 3DH]

A Review of Evaluation Practices of Gesture Generation in Embodied Conversational Agents
Pieter Wolfert, Nicole L. Robinson, Tony Belpaeme. 11 Jan 2021.

Quantitative analysis of robot gesticulation behavior
Unai Zabala, I. Rodriguez, J. M. Martínez-Otzeta, I. Irigoien, E. Lazkano. 22 Oct 2020. [SLR]

Understanding the Predictability of Gesture Parameters from Speech and their Perceptual Importance
Ylva Ferstl, Michael Neff, R. Mcdonnell. 02 Oct 2020. [SLR]

Geometry-guided Dense Perspective Network for Speech-Driven Facial Animation
Jing-ying Liu, Binyuan Hui, Kun Li, Yunke Liu, Yu-Kun Lai, Yuxiang Zhang, Yebin Liu, Jingyu Yang. 23 Aug 2020. [3DH, CVBM]

Moving fast and slow: Analysis of representations and post-processing in speech-driven automatic gesture generation
Taras Kucherenko, Dai Hasegawa, Naoshi Kaneko, G. Henter, Hedvig Kjellström. 16 Jul 2020.