FaceCrafter: Identity-Conditional Diffusion with Disentangled Control over Facial Pose, Expression, and Emotion

Human facial images encode a rich spectrum of information, encompassing both stable identity-related traits and mutable attributes such as pose, expression, and emotion. While recent advances in image generation have enabled high-quality identity-conditional face synthesis, precise control over non-identity attributes remains challenging, and disentangling identity from these mutable factors is particularly difficult. To address these limitations, we propose a novel identity-conditional diffusion model that introduces two lightweight control modules designed to independently manipulate facial pose, expression, and emotion without compromising identity preservation. These modules are embedded within the cross-attention layers of the base diffusion model, enabling precise attribute control with minimal parameter overhead. Furthermore, our tailored training strategy, which leverages cross-attention between the identity feature and each non-identity control feature, encourages identity features to remain orthogonal to control signals, enhancing controllability and diversity. Quantitative and qualitative evaluations, along with perceptual user studies, demonstrate that our method surpasses existing approaches in control accuracy over pose, expression, and emotion, while also improving generative diversity under identity-only conditioning.
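The abstract describes two mechanisms: small control branches grafted onto the base model's cross-attention layers, and a training objective that keeps identity features orthogonal to each control signal. The sketch below is one plausible PyTorch reading of that design; the class name ControlledCrossAttention, all dimensions, and the squared-cosine orthogonality penalty are illustrative assumptions, not the authors' implementation (the paper realizes the orthogonality constraint through cross-attention between the identity feature and each control feature).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ControlledCrossAttention(nn.Module):
    """Cross-attention conditioned on an identity embedding, with a
    lightweight control branch for one non-identity attribute."""

    def __init__(self, dim: int = 320, ctx_dim: int = 768, ctrl_dim: int = 64):
        super().__init__()
        # Base identity-conditional projections (assumed frozen while the
        # control modules are trained).
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(ctx_dim, dim, bias=False)
        self.to_v = nn.Linear(ctx_dim, dim, bias=False)
        # Control module: two small projections that turn a control code
        # (pose / expression / emotion) into extra key/value tokens.
        self.ctrl_k = nn.Linear(ctrl_dim, dim, bias=False)
        self.ctrl_v = nn.Linear(ctrl_dim, dim, bias=False)

    def forward(self, x, id_emb, ctrl_code):
        # x: (B, N, dim) image tokens; id_emb: (B, M, ctx_dim);
        # ctrl_code: (B, 1, ctrl_dim), one token per controlled attribute.
        q = self.to_q(x)
        k = torch.cat([self.to_k(id_emb), self.ctrl_k(ctrl_code)], dim=1)
        v = torch.cat([self.to_v(id_emb), self.ctrl_v(ctrl_code)], dim=1)
        attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v


def orthogonality_loss(id_feat, ctrl_feat):
    # Hypothetical stand-in for the paper's objective: penalize squared
    # cosine similarity so identity features stay orthogonal to each control
    # signal. id_feat, ctrl_feat: (B, D), projected to a shared dimension.
    id_n = F.normalize(id_feat, dim=-1)
    ctrl_n = F.normalize(ctrl_feat, dim=-1)
    return (id_n * ctrl_n).sum(dim=-1).pow(2).mean()


# Toy forward pass: 2 images, 64 latent tokens, one pose control token.
block = ControlledCrossAttention()
out = block(torch.randn(2, 64, 320),   # image tokens
            torch.randn(2, 4, 768),    # identity embedding tokens
            torch.randn(2, 1, 64))     # pose control code
print(out.shape)  # torch.Size([2, 64, 320])
```

Under this reading, each control branch adds only two small linear projections per cross-attention layer, consistent with the abstract's claim of minimal parameter overhead, and concatenating control tokens into the keys and values leaves the base identity-conditioning path structurally intact.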
@article{mishima2025_2505.15313,
  title={FaceCrafter: Identity-Conditional Diffusion with Disentangled Control over Facial Pose, Expression, and Emotion},
  author={Kazuaki Mishima and Antoni Bigata Casademunt and Stavros Petridis and Maja Pantic and Kenji Suzuki},
  journal={arXiv preprint arXiv:2505.15313},
  year={2025}
}