Domain Specific Benchmarks for Evaluating Multimodal Large Language Models

Large language models (LLMs) are increasingly being deployed across disciplines due to their advanced reasoning and problem-solving capabilities. To measure their effectiveness, various benchmarks have been developed that assess aspects of LLM reasoning, comprehension, and problem-solving. While several surveys address LLM evaluation and benchmarks, a domain-specific analysis remains underexplored in the literature. This paper introduces a taxonomy of seven key disciplines, encompassing various domains and application areas where LLMs are extensively utilized. Additionally, we provide a comprehensive review of LLM benchmarks and survey papers within each domain, highlighting the unique capabilities of LLMs and the challenges faced in their application. Finally, we compile and categorize these benchmarks by domain to create an accessible resource for researchers, aiming to pave the way for advancements toward artificial general intelligence (AGI).
@article{anjum2025_2506.12958,
  title   = {Domain Specific Benchmarks for Evaluating Multimodal Large Language Models},
  author  = {Khizar Anjum and Muhammad Arbab Arshad and Kadhim Hayawi and Efstathios Polyzos and Asadullah Tariq and Mohamed Adel Serhani and Laiba Batool and Brady Lund and Nishith Reddy Mannuru and Ravi Varma Kumar Bevara and Taslim Mahbub and Muhammad Zeeshan Akram and Sakib Shahriar},
  journal = {arXiv preprint arXiv:2506.12958},
  year    = {2025}
}