AyurParam: A State-of-the-Art Bilingual Language Model for Ayurveda

Mohd Nauman
Sravan Gvm
Vijay Devane
Shyam Pawar
Viraj Thakur
Kundeshwar Pundalik
Piyush Sawarkar
Rohit Saluja
Maunendra Desarkar
Ganesh Ramakrishnan
Main: 10 pages, 3 figures, 3 tables; bibliography: 2 pages
Abstract

Current large language models excel at broad, general-purpose tasks, but consistently underperform in highly specialized domains that require deep cultural, linguistic, and subject-matter expertise. In particular, traditional medical systems such as Ayurveda embody centuries of nuanced textual and clinical knowledge that mainstream LLMs fail to accurately interpret or apply. We introduce AyurParam-2.9B, a domain-specialized, bilingual language model fine-tuned from Param-1-2.9B on an extensive, expertly curated Ayurveda dataset spanning classical texts and clinical guidance. AyurParam's dataset incorporates context-aware, reasoning-oriented, and objective-style Q&A in both English and Hindi, with rigorous annotation protocols for factual precision and instructional clarity. Benchmarked on BhashaBench-Ayur, AyurParam not only surpasses all open-source instruction-tuned models in its size class (1.5--3B parameters), but also demonstrates competitive or superior performance compared to much larger models. These results highlight the necessity of authentic domain adaptation and high-quality supervision for delivering reliable, culturally congruent AI for specialized medical knowledge.
