IndicRAGSuite: Large-Scale Datasets and a Benchmark for Indian Language RAG Systems

Retrieval-Augmented Generation (RAG) systems enable language models to access relevant information and generate accurate, well-grounded, and contextually informed responses. However, for Indian languages, the development of high-quality RAG systems is hindered by the lack of two critical resources: (1) evaluation benchmarks for retrieval and generation tasks, and (2) large-scale training datasets for multilingual retrieval. Most existing benchmarks and datasets are centered on English or other high-resource languages, making it difficult to extend RAG capabilities to the diverse linguistic landscape of India. To address the lack of evaluation benchmarks, we create IndicMSMarco, a multilingual benchmark for evaluating retrieval quality and response generation in 13 Indian languages, built via manual translation of 1000 diverse queries from the MS MARCO dev set. To address the need for training data, we build a large-scale dataset of (question, answer, relevant passage) tuples derived from the Wikipedias of 19 Indian languages using state-of-the-art LLMs. Additionally, we include translated versions of the original MS MARCO dataset to further enrich the training data and ensure alignment with real-world information-seeking tasks. Resources are available here: this https URL
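
To make the benchmark's intended use concrete, the following is a minimal illustrative sketch (not the authors' code) of how (query, relevant passage) pairs such as those in IndicMSMarco could be used to score a multilingual dense retriever with Hits@1. The retriever model name, data format, and example Hindi strings are assumptions for illustration only; the paper's actual evaluation pipeline is not specified here.

    # Illustrative sketch: retrieval evaluation over (query, relevant passage) pairs.
    # Assumes a generic multilingual dense retriever; model choice is an assumption.
    from sentence_transformers import SentenceTransformer, util

    # Hypothetical benchmark entries: each query paired with its judged-relevant passage.
    benchmark = [
        {
            "query": "भारत की राजधानी क्या है?",              # "What is the capital of India?" (Hindi)
            "relevant_passage": "नई दिल्ली भारत की राजधानी है।",  # "New Delhi is the capital of India."
        },
    ]

    # Candidate passage pool (in practice, a large multilingual corpus).
    passages = [entry["relevant_passage"] for entry in benchmark] + [
        "ताजमहल आगरा में स्थित है।",  # distractor passage
    ]

    model = SentenceTransformer("intfloat/multilingual-e5-base")  # assumed retriever
    passage_emb = model.encode(passages, convert_to_tensor=True, normalize_embeddings=True)

    hits_at_1 = 0
    for entry in benchmark:
        query_emb = model.encode(entry["query"], convert_to_tensor=True, normalize_embeddings=True)
        scores = util.cos_sim(query_emb, passage_emb)[0]      # cosine similarity to every passage
        best = passages[int(scores.argmax())]                 # top-1 retrieved passage
        hits_at_1 += int(best == entry["relevant_passage"])

    print(f"Hits@1: {hits_at_1 / len(benchmark):.2f}")

The same (question, answer, relevant passage) tuple format would also serve as training data for a retriever, with the relevant passage acting as the positive and other pool passages as negatives.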
@article{prasanjith2025_2506.01615,
  title={IndicRAGSuite: Large-Scale Datasets and a Benchmark for Indian Language RAG Systems},
  author={Pasunuti Prasanjith and Prathmesh B More and Anoop Kunchukuttan and Raj Dabre},
  journal={arXiv preprint arXiv:2506.01615},
  year={2025}
}