A Survey of Multimodal Retrieval-Augmented Generation

Abstract

Multimodal Retrieval-Augmented Generation (MRAG) enhances large language models (LLMs) by integrating multimodal data (text, images, videos) into retrieval and generation processes, overcoming the limitations of text-only Retrieval-Augmented Generation (RAG). While RAG improves response accuracy by incorporating external textual knowledge, MRAG extends this framework to include multimodal retrieval and generation, leveraging contextual information from diverse data types. This approach reduces hallucinations and enhances question-answering systems by grounding responses in factual, multimodal knowledge. Recent studies show MRAG outperforms traditional RAG, especially in scenarios requiring both visual and textual understanding. This survey reviews MRAG's essential components, datasets, evaluation methods, and limitations, providing insights into its construction and improvement. It also identifies challenges and future research directions, highlighting MRAG's potential to revolutionize multimodal information retrieval and generation. By offering a comprehensive perspective, this work encourages further exploration into this promising paradigm.
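To make the paradigm the abstract describes concrete, the sketch below outlines a retrieve-then-generate loop: a question is embedded into a shared multimodal space, the most similar text, image, or video documents are retrieved, and a generator is conditioned on that evidence. This is a minimal illustration under assumptions, not the survey's method; embed_text, generate_answer, and the Document structure are hypothetical stand-ins for a real multimodal encoder (e.g., a CLIP-style model) and a vision-language model.

    # Minimal, illustrative MRAG pipeline. embed_text and generate_answer
    # are hypothetical placeholders, not APIs from the surveyed systems.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Document:
        content: str           # text passage, or a caption/path for an image or video
        modality: str          # "text", "image", or "video"
        embedding: np.ndarray  # vector in a shared multimodal embedding space

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def retrieve(query_emb: np.ndarray, corpus: list[Document], k: int = 3) -> list[Document]:
        # Rank all documents, regardless of modality, by similarity to the query.
        ranked = sorted(corpus, key=lambda d: cosine_similarity(query_emb, d.embedding),
                        reverse=True)
        return ranked[:k]

    def mrag_answer(question: str, corpus: list[Document],
                    embed_text, generate_answer) -> str:
        # 1. Embed the question into the shared multimodal space.
        query_emb = embed_text(question)
        # 2. Retrieve the top-k items across text, image, and video documents.
        evidence = retrieve(query_emb, corpus)
        # 3. Ground the generator in the retrieved multimodal evidence; this
        #    grounding step is what reduces hallucination relative to
        #    closed-book generation.
        context = "\n".join(f"[{d.modality}] {d.content}" for d in evidence)
        prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        non_text = [d for d in evidence if d.modality != "text"]
        return generate_answer(prompt, non_text)

The cross-modal retrieval in step 2 is the point of departure from text-only RAG: because all modalities share one embedding space, a textual question can surface visual evidence directly.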

@article{mei2025_2504.08748,
  title={A Survey of Multimodal Retrieval-Augmented Generation},
  author={Lang Mei and Siyu Mo and Zhihan Yang and Chong Chen},
  journal={arXiv preprint arXiv:2504.08748},
  year={2025}
}