
ReactDiff: Latent Diffusion for Facial Reaction Generation

Abstract

Given an audio-visual clip of a speaker, facial reaction generation aims to predict the listener's facial reactions. The challenge lies in capturing the relevance between video and audio while balancing appropriateness, realism, and diversity. While prior works have mostly focused on uni-modal inputs or simplified reaction mappings, recent approaches such as PerFRDiff have explored multi-modal inputs and the one-to-many nature of appropriate reaction mappings. In this work, we propose the Facial Reaction Diffusion (ReactDiff) framework that uniquely integrates a Multi-Modality Transformer with conditional diffusion in the latent space for enhanced reaction generation. Unlike existing methods, ReactDiff leverages intra- and inter-class attention for fine-grained multi-modal interaction, while the latent diffusion process between the encoder and decoder enables diverse yet contextually appropriate outputs. Experimental results demonstrate that ReactDiff significantly outperforms existing approaches, achieving a facial reaction correlation of 0.26 and a diversity score of 0.094 while maintaining competitive realism. The code is open-sourced at \href{this https URL}{github}.
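To make the core idea concrete, the following is a minimal, illustrative sketch of conditional latent diffusion for reaction generation: a denoiser predicts noise in a listener-reaction latent while being conditioned on fused speaker features, and a stochastic reverse process yields diverse samples. All module names, dimensions, the crude timestep embedding, and the DDPM-style sampler here are hypothetical simplifications for exposition, not the authors' ReactDiff implementation.

```python
# Hypothetical sketch of conditional latent diffusion for facial reaction
# generation. Names and sizes are illustrative, not from the paper.
import torch
import torch.nn as nn

LATENT_DIM, COND_DIM, T = 64, 128, 50  # assumed latent size, condition size, steps


class CondDenoiser(nn.Module):
    """Predicts the noise in a reaction latent, conditioned on speaker features."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + COND_DIM + 1, 256),
            nn.SiLU(),
            nn.Linear(256, LATENT_DIM),
        )

    def forward(self, z_t, cond, t):
        t_embed = t.float().view(-1, 1) / T  # crude normalized timestep embedding
        return self.net(torch.cat([z_t, cond, t_embed], dim=-1))


@torch.no_grad()
def sample_reaction_latent(denoiser, cond, steps=T):
    """Reverse diffusion: start from Gaussian noise and iteratively denoise."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    z = torch.randn(cond.size(0), LATENT_DIM)  # z_T ~ N(0, I)
    for t in reversed(range(steps)):
        t_batch = torch.full((cond.size(0),), t)
        eps = denoiser(z, cond, t_batch)
        # DDPM posterior mean: (z - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
        z = (z - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            z = z + betas[t].sqrt() * torch.randn_like(z)  # stochasticity -> diversity
    return z  # would be decoded into listener facial reactions by a separate decoder


if __name__ == "__main__":
    # `speaker_cond` stands in for fused audio-visual speaker features from a
    # multi-modality transformer; here it is just random vectors.
    denoiser = CondDenoiser()
    speaker_cond = torch.randn(2, COND_DIM)
    latent = sample_reaction_latent(denoiser, speaker_cond)
    print(latent.shape)  # torch.Size([2, 64])
```

Because the reverse process re-injects Gaussian noise at each step, repeated sampling from the same speaker condition produces different but condition-consistent latents, which is the mechanism the abstract credits for balancing diversity with appropriateness.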

@article{li2025_2505.14151,
  title={ReactDiff: Latent Diffusion for Facial Reaction Generation},
  author={Jiaming Li and Sheng Wang and Xin Wang and Yitao Zhu and Honglin Xiong and Zixu Zhuang and Qian Wang},
  journal={arXiv preprint arXiv:2505.14151},
  year={2025}
}