RefVNLI: Towards Scalable Evaluation of Subject-driven Text-to-image Generation

Subject-driven text-to-image (T2I) generation aims to produce images that align with a given textual description while preserving the visual identity of a subject shown in a reference image. Despite its broad downstream applicability, ranging from enhanced personalization in image generation to consistent character representation in video rendering, progress in this field is limited by the lack of reliable automatic evaluation. Existing methods either assess only one aspect of the task (i.e., textual alignment or subject preservation), misalign with human judgments, or rely on costly API-based evaluation. To address this gap, we introduce RefVNLI, a cost-effective metric that evaluates both textual alignment and subject preservation in a single run. Trained on a large-scale dataset derived from video-reasoning benchmarks and image perturbations, RefVNLI outperforms or statistically matches existing baselines across multiple benchmarks and subject categories (e.g., Animal, Object), achieving gains of up to 6.4 points in textual alignment and 5.9 points in subject preservation.
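As a rough illustration of how a dual-aspect metric of this kind could be consumed, the Python sketch below assumes a hypothetical refvnli_score interface that returns both scores from a single call; the function name, score range, and thresholds are illustrative assumptions, not the paper's released API.

    from dataclasses import dataclass

    # Hypothetical interface sketch: the abstract describes RefVNLI as scoring
    # textual alignment and subject preservation together in one run.
    # Names, score ranges, and thresholds here are assumptions for illustration.

    @dataclass
    class RefVNLIScores:
        textual_alignment: float      # how well the image matches the prompt
        subject_preservation: float   # how well the reference subject's identity is kept

    def refvnli_score(reference_image: str, prompt: str, generated_image: str) -> RefVNLIScores:
        """Placeholder: a real implementation would run the trained RefVNLI model."""
        raise NotImplementedError("Replace with an actual model call.")

    def accept(scores: RefVNLIScores, ta_thresh: float = 0.5, sp_thresh: float = 0.5) -> bool:
        """Accept a generation only if both aspects clear their (assumed) thresholds."""
        return scores.textual_alignment >= ta_thresh and scores.subject_preservation >= sp_thresh

Under these assumptions, a single call supplies both criteria needed to rank or filter candidate generations, which is the single-run evaluation the abstract highlights.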
@article{slobodkin2025_2504.17502,
  title={RefVNLI: Towards Scalable Evaluation of Subject-driven Text-to-image Generation},
  author={Aviv Slobodkin and Hagai Taitelbaum and Yonatan Bitton and Brian Gordon and Michal Sokolik and Nitzan Bitton Guetta and Almog Gueta and Royi Rassin and Dani Lischinski and Idan Szpektor},
  journal={arXiv preprint arXiv:2504.17502},
  year={2025}
}