TIQA: Human-Aligned Text Quality Assessment in Generated Images

Kirill Koltsov
Aleksandr Gushchin
Dmitriy Vatolin
Anastasia Antsiferova
Main: 8 pages · 14 figures · 8 tables · Bibliography: 3 pages · Appendix: 15 pages
Abstract

Text rendering remains a persistent failure mode of modern text-to-image (T2I) models, yet existing evaluations rely on OCR correctness or VLM-based judging procedures that are poorly aligned with perceptual text artifacts. We introduce Text-in-Image Quality Assessment (TIQA), a task that predicts a scalar quality score matching human judgments of rendered-text fidelity within cropped text regions. We release two MOS-labeled datasets: TIQA-Crops (10k text crops) and TIQA-Images (1,500 images), spanning 20+ T2I models, including proprietary ones. We also propose ANTIQA, a lightweight method with text-specific biases, and show that it improves correlation with human scores over OCR confidence, VLM judges, and generic NR-IQA metrics by at least ~0.05 on TIQA-Crops and ~0.08 on TIQA-Images, as measured by PLCC. Finally, we show that TIQA models are valuable in downstream tasks: for example, selecting the best-of-5 generations with ANTIQA improves human-rated text quality by +14% on average, demonstrating practical value for filtering and reranking in generation pipelines.
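The best-of-N reranking the abstract describes can be sketched minimally: score each candidate generation with a TIQA-style model and keep the highest-scoring one. The sketch below is hypothetical — `score_text_quality` is a placeholder for a trained scorer such as ANTIQA, and the dict-based candidates stand in for real generated images; neither reflects the paper's actual API.

```python
def score_text_quality(candidate):
    """Placeholder scorer: a real pipeline would run a trained
    TIQA model (e.g., ANTIQA) on the rendered image."""
    return candidate["quality"]  # dummy numeric field for illustration

def best_of_n(candidates, score_fn):
    """Return the candidate with the highest predicted text quality."""
    return max(candidates, key=score_fn)

# Usage: pick the best of 5 generations for a single prompt.
generations = [{"id": i, "quality": q}
               for i, q in enumerate([0.2, 0.9, 0.5, 0.7, 0.4])]
best = best_of_n(generations, score_text_quality)
```

The same scorer can be used with a threshold to filter out low-quality generations instead of reranking them.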
