
Entity Image and Mixed-Modal Image Retrieval Datasets

Main: 8 pages
6 figures
Bibliography: 1 page
7 tables
Abstract

Despite advances in multimodal learning, challenging benchmarks for mixed-modal image retrieval that combines visual and textual information are lacking. This paper introduces a novel benchmark to rigorously evaluate image retrieval that demands deep cross-modal contextual understanding. We present two new datasets: the Entity Image Dataset (EI), providing canonical images for Wikipedia entities, and the Mixed-Modal Image Retrieval Dataset (MMIR), derived from the WIT dataset. The MMIR benchmark features two challenging query types requiring models to ground textual descriptions in the context of provided visual entities: single entity-image queries (one entity image with descriptive text) and multi-entity-image queries (multiple entity images with relational text). We empirically validate the benchmark's utility as both a training corpus and an evaluation set for mixed-modal retrieval. The quality of both datasets is further affirmed through crowd-sourced human annotations. The datasets are accessible through the GitHub page: this https URL.
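To make the two query types concrete, here is a purely illustrative sketch of what a mixed-modal query record might look like. The field names and example values are assumptions for illustration only and are not taken from the paper or its released schema; consult the GitHub page for the actual dataset format.

```python
# Hypothetical record structure for a mixed-modal retrieval query.
# Field names (query_text, entity_images, target_image) are illustrative
# assumptions, not the actual MMIR schema.

from dataclasses import dataclass, field
from typing import List


@dataclass
class MixedModalQuery:
    """A retrieval query pairing text with one or more entity images."""
    query_text: str                                           # descriptive or relational text
    entity_images: List[str] = field(default_factory=list)   # canonical entity images (EI)
    target_image: str = ""                                    # ground-truth image to retrieve


# Single entity-image query: one entity image plus descriptive text.
single = MixedModalQuery(
    query_text="The bridge at night, illuminated during the festival",
    entity_images=["ei/example_bridge.jpg"],
    target_image="mmir/000123.jpg",
)

# Multi-entity-image query: several entity images plus relational text.
multi = MixedModalQuery(
    query_text="The two leaders shaking hands at the 2019 summit",
    entity_images=["ei/person_a.jpg", "ei/person_b.jpg"],
    target_image="mmir/004567.jpg",
)
```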

@article{blaga2025_2506.02291,
  title={Entity Image and Mixed-Modal Image Retrieval Datasets},
  author={Cristian-Ioan Blaga and Paul Suganthan and Sahil Dua and Krishna Srinivasan and Enrique Alfonseca and Peter Dornbach and Tom Duerig and Imed Zitouni and Zhe Dong},
  journal={arXiv preprint arXiv:2506.02291},
  year={2025}
}