Analyzing Input and Output Representations for Speech-Driven Gesture Generation

8 March 2019
Taras Kucherenko, Dai Hasegawa, G. Henter, Naoshi Kaneko, Hedvig Kjellström

Papers citing "Analyzing Input and Output Representations for Speech-Driven Gesture Generation"

10 / 60 papers shown
A Review of Evaluation Practices of Gesture Generation in Embodied Conversational Agents
Pieter Wolfert, Nicole L. Robinson, Tony Belpaeme
11 Jan 2021

Understanding the Predictability of Gesture Parameters from Speech and their Perceptual Importance
Ylva Ferstl, Michael Neff, R. McDonnell
02 Oct 2020

Can we trust online crowdworkers? Comparing online and offline participants in a preference test of virtual agents
Patrik Jonell, Taras Kucherenko, Ilaria Torre, Jonas Beskow
22 Sep 2020

Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity
Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim, Geehyuk Lee
04 Sep 2020

Sequence-to-Sequence Predictive Model: From Prosody To Communicative Gestures
Fajrian Yunus, Chloé Clavel, Catherine Pelachaud
17 Aug 2020

Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach
Chaitanya Ahuja, Dong Won Lee, Y. Nakano, Louis-Philippe Morency
24 Jul 2020

Moving fast and slow: Analysis of representations and post-processing in speech-driven automatic gesture generation
Taras Kucherenko, Dai Hasegawa, Naoshi Kaneko, G. Henter, Hedvig Kjellström
16 Jul 2020

Let's Face It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings
Patrik Jonell, Taras Kucherenko, G. Henter, Jonas Beskow
11 Jun 2020

Gesticulator: A framework for semantically-aware speech-driven gesture generation
Taras Kucherenko, Patrik Jonell, S. V. Waveren, G. Henter, Simon Alexanderson, Iolanda Leite, Hedvig Kjellström
25 Jan 2020

MoGlow: Probabilistic and controllable motion synthesis using normalising flows
G. Henter, Simon Alexanderson, Jonas Beskow
16 May 2019