ResearchTrend.AI

arXiv:2101.05684
Generating coherent spontaneous speech and gesture from text
Simon Alexanderson, Éva Székely, G. Henter, Taras Kucherenko, Jonas Beskow
14 January 2021 · SLR

Papers citing "Generating coherent spontaneous speech and gesture from text" (11 papers)
Fake it to make it: Using synthetic data to remedy the data shortage in joint multimodal speech-and-gesture synthesis
Shivam Mehta, Anna Deichler, Jim O'Regan, Birger Moëll, Jonas Beskow, G. Henter, Simon Alexanderson
30 Apr 2024

Unified speech and gesture synthesis using flow matching
Shivam Mehta, Ruibo Tu, Simon Alexanderson, Jonas Beskow, Éva Székely, G. Henter
08 Oct 2023

UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons
Sicheng Yang, Z. Wang, Zhiyong Wu, Minglei Li, Zhensong Zhang, ..., Lei Hao, Songcen Xu, Xiaofei Wu, Changpeng Yang, Zonghong Dai
13 Sep 2023 · DiffM

Diff-TTSG: Denoising probabilistic integrated speech and gesture synthesis
Shivam Mehta, Siyang Wang, Simon Alexanderson, Jonas Beskow, Éva Székely, G. Henter
15 Jun 2023 · DiffM

QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation
Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Hao-Wen Zhuang
18 May 2023 · SLR

DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models
Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Ming Cheng, Long Xiao
08 May 2023

A Comprehensive Review of Data-Driven Co-Speech Gesture Generation
Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, G. Henter, Michael Neff
13 Jan 2023 · SLR

Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models
Simon Alexanderson, Rajmund Nagy, Jonas Beskow, G. Henter
17 Nov 2022 · DiffM, VGen

BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis
Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou, E. Bozkurt, Bo Zheng
10 Mar 2022 · SLR, CVBM

Integrated Speech and Gesture Synthesis
Siyang Wang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, G. Henter, Éva Székely
25 Aug 2021

A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents
Rajmund Nagy, Taras Kucherenko, Birger Moell, André Pereira, Hedvig Kjellström, Ulysses Bernardet
24 Feb 2021