Exploring Timeline Control for Facial Motion Generation

27 May 2025
Yifeng Ma, Jinwei Qi, Chaonan Ji, Peng Zhang, Bang Zhang, Zhidong Deng, Liefeng Bo
Main: 8 pages, 12 figures, 2 tables; bibliography: 3 pages
Abstract

This paper introduces a new control signal for facial motion generation: timeline control. Compared to audio and text signals, timelines provide more fine-grained control, such as generating specific facial motions with precise timing. Users specify a multi-track timeline of facial actions arranged in temporal intervals, allowing precise control over the timing of each action. To model the timeline control capability, we first annotate the time intervals of facial actions in natural facial motion sequences at frame-level granularity, using Toeplitz Inverse Covariance-based Clustering to minimize human labor. Based on these annotations, we propose a diffusion-based generation model that produces facial motions that are natural and accurately aligned with the input timelines. Our method also supports text-guided motion generation by using ChatGPT to convert text into timelines. Experimental results show that our method annotates facial action intervals with satisfactory accuracy and produces natural facial motions accurately aligned with timelines.
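As a concrete illustration of the timeline input described above, the sketch below renders a multi-track timeline of temporal intervals into a frame-level activation matrix, one plausible form of per-frame conditioning for a diffusion model. All names here (Interval, timeline_to_frame_matrix, the action vocabulary) are hypothetical illustrations, not the paper's actual data structures or API.

# Hypothetical sketch of the multi-track timeline input; names are
# illustrative and not taken from the paper.
from dataclasses import dataclass

@dataclass
class Interval:
    action: str   # e.g. "smile", "blink", "brow_raise"
    start: int    # first frame of the action (inclusive)
    end: int      # last frame of the action (inclusive)

ACTIONS = ["smile", "blink", "brow_raise", "head_nod"]

def timeline_to_frame_matrix(tracks: list[list[Interval]], num_frames: int):
    """Render a multi-track timeline into a frame-level activation matrix.

    Returns a (num_frames x len(ACTIONS)) list of 0/1 rows; row t marks
    which actions are active at frame t. A dense per-frame signal like
    this is one plausible way to condition generation on the timeline.
    """
    index = {a: i for i, a in enumerate(ACTIONS)}
    matrix = [[0] * len(ACTIONS) for _ in range(num_frames)]
    for track in tracks:
        for iv in track:
            # Clamp each interval to the sequence length before marking it.
            for t in range(max(0, iv.start), min(num_frames, iv.end + 1)):
                matrix[t][index[iv.action]] = 1
    return matrix

# Two tracks: a smile overlapping two short blinks, with exact frame timing.
tracks = [
    [Interval("smile", 10, 60)],
    [Interval("blink", 30, 34), Interval("blink", 70, 74)],
]
control = timeline_to_frame_matrix(tracks, num_frames=100)

Overlapping intervals on different tracks simply activate multiple actions in the same frame, which is what lets a multi-track timeline express simultaneous facial motions.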

@article{ma2025_2505.20861,
  title={Exploring Timeline Control for Facial Motion Generation},
  author={Yifeng Ma and Jinwei Qi and Chaonan Ji and Peng Zhang and Bang Zhang and Zhidong Deng and Liefeng Bo},
  journal={arXiv preprint arXiv:2505.20861},
  year={2025}
}