GaussianHead: High-fidelity Head Avatars with Learnable Gaussian Derivation
- 3DGS

Previous head avatar methods have primarily relied on fixed-shape scene primitives, failing to balance geometric topology, texture detail, and computational efficiency. Hybrid neural field methods based on planes and voxels offer fast rendering, but they all extract features through explicit axis-aligned mappings, which introduce axis-aligned bias and feature dilution. We present GaussianHead, which uses deformable 3D Gaussians as the building blocks of the head avatar. We propose a novel scheme in which the core Gaussians designated for rendering undergo a learnable derivation before being mapped onto a factor plane to acquire canonical sub-factors; our factor blending strategy then combines these sub-factors into the canonical features of the core Gaussians used for rendering. This departs from the previous practice of axis-aligned mappings and notably improves the representation of subtle structures such as teeth, wrinkles, hair, and even facial pores. Compared with state-of-the-art methods, the unique primitive choice and factor decomposition in GaussianHead deliver superior visual results while maintaining rendering performance (0.1 seconds per frame). Code will be released for research.
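
To make the contrast with axis-aligned mappings concrete, here is a minimal PyTorch sketch, not the authors' released code, of a factor-plane lookup in which query points are first transformed by learnable rotations before being projected onto a shared 2D plane, with the resulting sub-factors combined by a simple blend. The `FactorPlaneField` class, the axis-angle rotation parameterization, the plane resolution, and the product-style blending are all illustrative assumptions rather than details from the paper.

```python
# Hypothetical sketch of a learnable-rotation factor-plane lookup.
# All class/parameter names and shapes are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorPlaneField(nn.Module):
    def __init__(self, num_factors: int = 3, res: int = 128, feat_dim: int = 16):
        super().__init__()
        # One shared 2D feature plane, queried once per factor.
        self.plane = nn.Parameter(torch.randn(1, feat_dim, res, res) * 0.1)
        # Learnable per-factor rotations (axis-angle vectors), replacing the
        # fixed xy/xz/yz projections of an axis-aligned tri-plane.
        self.axis_angle = nn.Parameter(torch.randn(num_factors, 3) * 0.1)

    def _rotations(self) -> torch.Tensor:
        # Rodrigues' formula: axis-angle -> (F, 3, 3) rotation matrices.
        theta = self.axis_angle.norm(dim=-1, keepdim=True).clamp_min(1e-8)
        k = self.axis_angle / theta
        K = torch.zeros(len(k), 3, 3, device=k.device)
        K[:, 0, 1], K[:, 0, 2] = -k[:, 2], k[:, 1]
        K[:, 1, 0], K[:, 1, 2] = k[:, 2], -k[:, 0]
        K[:, 2, 0], K[:, 2, 1] = -k[:, 1], k[:, 0]
        eye = torch.eye(3, device=k.device)
        s, c = theta.sin()[..., None], theta.cos()[..., None]
        return eye + s * K + (1 - c) * (K @ K)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) Gaussian centers normalized to [-1, 1]^3.
        R = self._rotations()                          # (F, 3, 3)
        derived = torch.einsum("fij,nj->fni", R, xyz)  # rotate before lookup
        uv = derived[..., :2]                          # project onto the plane
        feats = []
        for f in range(uv.shape[0]):
            grid = uv[f].view(1, -1, 1, 2)             # grid_sample layout
            sampled = F.grid_sample(self.plane, grid, align_corners=True)
            feats.append(sampled.view(sampled.shape[1], -1).T)  # (N, C)
        # Factor blending: a simple product of sub-factors (assumed blend).
        return torch.stack(feats, dim=0).prod(dim=0)   # (N, C)


if __name__ == "__main__":
    field = FactorPlaneField()
    centers = torch.rand(1024, 3) * 2 - 1
    print(field(centers).shape)  # torch.Size([1024, 16])
```

A fixed tri-plane would instead project every query onto the xy, xz, and yz planes; making the projections learnable is what removes the axis-aligned bias the abstract describes, at the cost of a few extra rotation parameters.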
@article{wang2025_2312.01632,
  title   = {GaussianHead: High-fidelity Head Avatars with Learnable Gaussian Derivation},
  author  = {Jie Wang and Jiu-Cheng Xie and Xianyan Li and Feng Xu and Chi-Man Pun and Hao Gao},
  journal = {arXiv preprint arXiv:2312.01632},
  year    = {2025}
}