
Visualized Text-to-Image Retrieval

Main: 4 Pages
Appendix: 8 Pages
Bibliography: 3 Pages
10 Figures
8 Tables
Abstract

We propose Visualize-then-Retrieve (VisRet), a new paradigm for Text-to-Image (T2I) retrieval that mitigates the limitations of cross-modal similarity alignment in existing multi-modal embeddings. VisRet first projects textual queries into the image modality via T2I generation, then performs retrieval within the image modality, bypassing the weaknesses of cross-modal retrievers in recognizing subtle visual-spatial features. Experiments on three knowledge-intensive T2I retrieval benchmarks, including a newly introduced multi-entity benchmark, demonstrate that VisRet consistently improves T2I retrieval by 24.5% to 32.7% NDCG@10 across different embedding models. VisRet also significantly benefits downstream visual question answering accuracy when used in retrieval-augmented generation pipelines. The method is plug-and-play and compatible with off-the-shelf retrievers, making it an effective module for knowledge-intensive multi-modal systems. Our code and the new benchmark are publicly available at this https URL.
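As a rough illustration of the two-stage pipeline the abstract describes (text-to-image generation followed by intra-modal retrieval), here is a minimal sketch. The generator, image encoder, and precomputed corpus embeddings are hypothetical placeholders, not the authors' released implementation.

```python
import numpy as np

def visret_retrieve(query_text, t2i_generator, image_encoder, corpus_embeddings, top_k=10):
    """Sketch of Visualize-then-Retrieve (VisRet).

    1. Project the text query into the image modality with a T2I generator.
    2. Embed the generated image with the same encoder used for the image corpus.
    3. Rank corpus images by intra-modal (image-to-image) similarity.

    `corpus_embeddings` is assumed to be an (N, d) array of L2-normalized image vectors.
    """
    generated_image = t2i_generator(query_text)        # text -> synthetic image
    query_embedding = image_encoder(generated_image)   # image -> d-dimensional vector
    query_embedding = query_embedding / np.linalg.norm(query_embedding)
    scores = corpus_embeddings @ query_embedding       # cosine similarity to corpus
    return np.argsort(-scores)[:top_k]                 # indices of the top-k images
```

Because the query is embedded as an image rather than as text, any off-the-shelf image retriever and index can be plugged into the second stage unchanged.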

@article{wu2025_2505.20291,
  title={Visualized Text-to-Image Retrieval},
  author={Di Wu and Yixin Wan and Kai-Wei Chang},
  journal={arXiv preprint arXiv:2505.20291},
  year={2025}
}