Disentangling 3D from Large Vision-Language Models for Controlled Portrait Generation
- 3DV

We consider the problem of disentangling 3D from large vision-language models, which we demonstrate on generative 3D portraits. This allows free-form text control of appearance attributes like age, hair style, and glasses, and 3D geometry control of face expression and camera pose. In this setting, we assume the use of a pre-trained large vision-language model (LVLM; CLIP) to generate from a smaller 2D dataset with no additional paired labels, together with a pre-defined 3D morphable model (FLAME). First, we disentangle via canonicalization to a 2D reference frame from a deformable neural 3D triplane representation. However, another form of entanglement arises from significant noise in the LVLM's embedding space that describes irrelevant features, which damages output quality and diversity. We overcome this with a Jacobian regularization that can be computed efficiently with a stochastic approximator. Compared to existing methods, our approach produces portraits with added text and 3D control, where portraits remain consistent when either control is changed. Broadly, this approach lets creators control 3D generators on their own 2D face data without needing resources to label large datasets or train large models.
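As an illustrative aside (the paper's own code is not shown here), "a Jacobian regularization that can be computed efficiently with a stochastic approximator" can be read as a Hutchinson-style random-projection estimate of the squared Frobenius norm of a Jacobian. The sketch below is an assumption about that general technique, not the authors' implementation; the function name and the generator f are hypothetical.

    import torch

    def stochastic_jacobian_penalty(f, x, num_projections=1):
        # Hutchinson-style estimate of ||df/dx||_F^2: for v ~ N(0, I) in the
        # output space, E[||v^T J||^2] = trace(J J^T) = ||J||_F^2, so each
        # random projection needs only one vector-Jacobian product
        # (one backward pass) instead of the full Jacobian.
        x = x.detach().requires_grad_(True)
        y = f(x)
        penalty = x.new_zeros(())
        for _ in range(num_projections):
            v = torch.randn_like(y)  # random direction in output space
            (vjp,) = torch.autograd.grad(
                y, x, grad_outputs=v,
                create_graph=True, retain_graph=True,  # keep graph so the penalty is differentiable
            )
            penalty = penalty + vjp.pow(2).sum() / num_projections
        return penalty

In training, such a term would typically be added to the main objective, e.g. loss = task_loss + lam * stochastic_jacobian_penalty(generator, embedding), penalizing the generator's sensitivity to irrelevant directions in the embedding space.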
@article{huang2025_2506.14015,
  title={Disentangling 3D from Large Vision-Language Models for Controlled Portrait Generation},
  author={Nick Yiwen Huang and Akin Caliskan and Berkay Kicanaoglu and James Tompkin and Hyeongwoo Kim},
  journal={arXiv preprint arXiv:2506.14015},
  year={2025}
}