ImagenWorld: Stress-Testing Image Generation Models with Explainable Human Evaluation on Open-ended Real-World Tasks

Samin Mahdizadeh Sani
Max Ku
Nima Jamali
Matina Mahdizadeh Sani
Paria Khoshtab
Wei-Chieh Sun
Parnian Fazel
Zhi Rui Tam
Thomas Chong
Edisy Kin Wai Chan
Donald Wai Tong Tsang
Chiao-Wei Hsu
Ting Wai Lam
Ho Yin Sam Ng
Chiafeng Chu
Chak-Wing Mak
Keming Wu
Hiu Tung Wong
Yik Chun Ho
Chi Ruan
Zhuofeng Li
I-Sheng Fang
Shih-Ying Yeh
Ho Kei Cheng
Ping Nie
Wenhu Chen
Main: 10 pages · Bibliography: 5 pages · Appendix: 16 pages · 19 figures · 9 tables
Abstract

Advances in diffusion, autoregressive, and hybrid models have enabled high-quality image synthesis for tasks such as text-to-image generation, editing, and reference-guided composition. Yet existing benchmarks remain limited: they either focus on isolated tasks, cover only narrow domains, or provide opaque scores without explaining failure modes. We introduce ImagenWorld, a benchmark of 3.6K condition sets spanning six core tasks (generation and editing, with single or multiple references) and six topical domains (artworks, photorealistic images, information graphics, textual graphics, computer graphics, and screenshots). The benchmark is supported by 20K fine-grained human annotations and an explainable evaluation schema that tags localized object-level and segment-level errors, complementing automated VLM-based metrics. Our large-scale evaluation of 14 models yields several insights: (1) Models typically struggle more with editing tasks than with generation tasks, especially local edits. (2) Models excel in artistic and photorealistic settings but struggle with symbolic and text-heavy domains such as screenshots and information graphics. (3) Closed-source systems lead overall, while targeted data curation (e.g., Qwen-Image) narrows the gap in text-heavy cases. (4) Modern VLM-based metrics achieve Kendall accuracies up to 0.79, approximating human rankings, but fall short of fine-grained, explainable error attribution. ImagenWorld provides both a rigorous benchmark and a diagnostic tool for advancing robust image generation.
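As a point of reference for the Kendall-accuracy figure above: Kendall accuracy is commonly computed as the fraction of item pairs that an automated metric orders the same way as human annotators. A minimal sketch (the scores below are hypothetical, not from the paper):

```python
from itertools import combinations

def kendall_accuracy(human_scores, metric_scores):
    """Fraction of item pairs on which the automated metric and the
    human annotators agree about which item is better (pairwise
    agreement; ties count as disagreement here)."""
    pairs = list(combinations(range(len(human_scores)), 2))
    concordant = sum(
        1 for i, j in pairs
        if (human_scores[i] - human_scores[j])
           * (metric_scores[i] - metric_scores[j]) > 0
    )
    return concordant / len(pairs)

# Hypothetical human vs. VLM-judge scores for five generated images
human = [4.5, 3.0, 4.0, 2.0, 1.5]
vlm   = [4.2, 4.0, 3.8, 2.5, 1.0]
print(kendall_accuracy(human, vlm))  # 0.9: 9 of 10 pairs agree
```

A score of 1.0 would mean the metric reproduces the human ranking exactly; 0.5 is chance level for pairwise agreement.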
