In this paper, we introduce AffectVLM, a vision-language model designed to integrate multiview information for a semantically rich and visually comprehensive understanding of facial emotions from 3D/4D data. To capture visual features effectively, we propose a joint representation learning framework paired with a novel gradient-friendly loss function that accelerates convergence towards an optimal feature representation. In addition, we introduce augmented textual prompts to enhance the model's linguistic capabilities and employ mixed view augmentation to expand the visual dataset. We also develop a Streamlit app for real-time interactive inference and enable the model for distributed learning. Extensive experiments validate the superior performance of AffectVLM across multiple benchmarks.
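To make the contrastive language-image objective with augmented textual prompts concrete, the following minimal Python sketch shows a CLIP-style symmetric contrastive loss over image and prompt embeddings. The prompt templates, emotion labels, embedding dimension, and helper names are illustrative assumptions; they are not taken from AffectVLM, whose actual architecture and gradient-friendly loss are described in the paper itself.

# Minimal sketch of CLIP-style contrastive learning with augmented textual
# prompts. All names and templates below are assumptions for illustration,
# not the AffectVLM implementation.
import torch
import torch.nn.functional as F

EMOTIONS = ["angry", "disgusted", "fearful", "happy", "sad", "surprised"]

# Hypothetical prompt templates used to augment the textual side.
TEMPLATES = [
    "a 3D face scan showing a {} expression",
    "a rendered multiview of a {} face",
    "a photo of a person looking {}",
]

def build_prompts(emotions=EMOTIONS, templates=TEMPLATES):
    """Expand each emotion label into several augmented textual prompts."""
    return [t.format(e) for e in emotions for t in templates]

def contrastive_loss(image_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE loss over L2-normalized image/text embeddings."""
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feats @ text_feats.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)

if __name__ == "__main__":
    prompts = build_prompts()
    # Stand-in embeddings; in practice these would come from the vision and
    # text encoders applied to multiview renders and the prompts above.
    batch, dim = 8, 512
    img = torch.randn(batch, dim)
    txt = torch.randn(batch, dim)
    print("prompts:", len(prompts), "loss:", contrastive_loss(img, txt).item())

In this sketch, each matched image-prompt pair in the batch serves as a positive while all other pairings act as negatives, which is the standard contrastive language-image setup the title refers to.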
@article{behzad2025_2504.19739,
  title   = {Contrastive Language-Image Learning with Augmented Textual Prompts for 3D/4D FER Using Vision-Language Model},
  author  = {Muzammil Behzad and Guoying Zhao},
  journal = {arXiv preprint arXiv:2504.19739},
  year    = {2025}
}