DualTalker: A Cross-Modal Dual Learning Approach for Speech-Driven 3D Facial Animation (v2, latest)

8 November 2023
Guinan Su, Yanwu Yang, Zhifeng Li
Community: VGen
arXiv: 2311.04766 · abs · PDF · HTML · GitHub (9★)

Papers citing "DualTalker: A Cross-Modal Dual Learning Approach for Speech-Driven 3D Facial Animation"

1 / 1 papers shown

JambaTalk: Speech-Driven 3D Talking Head Generation Based on Hybrid Transformer-Mamba Language Model
Farzaneh Jafari, Stefano Berretti, Anup Basu
Community: Mamba
03 Aug 2024