Efficient Listener: Dyadic Facial Motion Synthesis via Action Diffusion

Generating realistic listener facial motions in dyadic conversations remains challenging due to the high-dimensional action space and the need to model temporal dependencies. Existing approaches typically extract 3D Morphable Model (3DMM) coefficients and model listener behavior in the 3DMM space. However, the computational cost of the 3DMM becomes a bottleneck, making real-time interactive responses difficult to achieve. To tackle this problem, we propose Facial Action Diffusion (FAD), which brings diffusion methods from the field of image generation to facial action generation for efficient synthesis. We further build the Efficient Listener Network (ELNet), specially designed to take both the visual and audio information of the speaker as input. Combining FAD and ELNet, the proposed method learns effective listener facial motion representations, outperforms state-of-the-art methods, and reduces computational time by 99%.
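To make the general idea concrete, the sketch below shows a DDPM-style reverse diffusion process over listener facial-action vectors, conditioned on fused speaker audio/visual features. This is a minimal illustration of conditional action diffusion, not the authors' FAD/ELNet implementation; all module names, feature dimensions, and the noise schedule are assumptions introduced for the example.

```python
import torch
import torch.nn as nn

class ListenerDenoiser(nn.Module):
    """Hypothetical denoiser: predicts the noise added to a listener facial-action
    sequence, given the noisy actions, the diffusion timestep, and speaker context."""
    def __init__(self, action_dim=64, ctx_dim=256, hidden=512):
        super().__init__()
        self.time_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        self.net = nn.Sequential(
            nn.Linear(action_dim + ctx_dim + hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, noisy_actions, t, speaker_ctx):
        # noisy_actions: (B, T, action_dim); speaker_ctx: (B, T, ctx_dim); t: (B,)
        temb = self.time_embed(t.float().view(-1, 1, 1).expand(-1, noisy_actions.size(1), 1))
        x = torch.cat([noisy_actions, speaker_ctx, temb], dim=-1)
        return self.net(x)  # predicted noise, same shape as noisy_actions

@torch.no_grad()
def sample_listener_actions(model, speaker_ctx, steps=50, action_dim=64):
    """Reverse diffusion: start from Gaussian noise and iteratively denoise,
    conditioning every step on the speaker's audio/visual context."""
    B, T, _ = speaker_ctx.shape
    betas = torch.linspace(1e-4, 0.02, steps)          # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(B, T, action_dim)
    for i in reversed(range(steps)):
        t = torch.full((B,), i, dtype=torch.long)
        eps = model(x, t, speaker_ctx)
        mean = (x - (betas[i] / torch.sqrt(1.0 - alpha_bars[i])) * eps) / torch.sqrt(alphas[i])
        noise = torch.randn_like(x) if i > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[i]) * noise
    return x  # (B, T, action_dim) listener facial-action sequence

# Usage: speaker audio and visual features fused into per-frame context vectors.
model = ListenerDenoiser()
speaker_ctx = torch.randn(2, 30, 256)  # batch of 2, 30 frames, hypothetical 256-d fused features
actions = sample_listener_actions(model, speaker_ctx)
```

Sampling directly in a compact action space, rather than regressing 3DMM coefficients at every step, is what allows this kind of pipeline to keep per-frame cost low.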
@article{wang2025_2504.20685,
  title={Efficient Listener: Dyadic Facial Motion Synthesis via Action Diffusion},
  author={Zesheng Wang and Alexandre Bruckert and Patrick Le Callet and Guangtao Zhai},
  journal={arXiv preprint arXiv:2504.20685},
  year={2025}
}