
Co-Speech Gesture and Facial Expression Generation for Non-Photorealistic 3D Characters

Main: 1 page · 2 figures · 1 table · Bibliography: 1 page
Abstract

With the advancement of conversational AI, research on bodily expression, including gestures and facial expressions, has also progressed. However, many existing studies focus on photorealistic avatars, making their methods unsuitable for non-photorealistic characters such as those found in anime. This study proposes methods for expressing emotions, including exaggerated expressions unique to non-photorealistic characters, by utilizing facial expression data extracted from comics together with dialogue-specific semantic gestures. A user study demonstrated significant improvements over existing research across multiple evaluation aspects.

@article{omine2025_2506.16159,
  title={Co-Speech Gesture and Facial Expression Generation for Non-Photorealistic 3D Characters},
  author={Taisei Omine and Naoyuki Kawabata and Fuminori Homma},
  journal={arXiv preprint arXiv:2506.16159},
  year={2025}
}