Fine-Tuning Small Language Models for Domain-Specific AI: An Edge AI Perspective

3 March 2025
Rakshit Aralimatti
Syed Abdul Gaffar Shakhadri
Kruthika KR
Kartik Basavaraj Angadi
Abstract

Deploying large-scale language models on edge devices faces inherent challenges such as high computational demands, energy consumption, and potential data privacy risks. This paper introduces the Shakti Small Language Models (SLMs): Shakti-100M, Shakti-250M, and Shakti-500M, which target these constraints head-on. By combining efficient architectures, quantization techniques, and responsible AI principles, the Shakti series enables on-device intelligence for smartphones, smart appliances, IoT systems, and beyond. We provide comprehensive insights into their design philosophy, training pipelines, and benchmark performance on both general tasks (e.g., MMLU, HellaSwag) and specialized domains (healthcare, finance, and legal). Our findings illustrate that compact models, when carefully engineered and fine-tuned, can meet and often exceed expectations in real-world edge-AI scenarios.
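The abstract names quantization as one of the levers that makes small-model deployment on edge hardware feasible. As a minimal, hypothetical sketch of that idea (not the paper's actual pipeline; the model id below is a placeholder), post-training dynamic INT8 quantization in PyTorch compresses a small causal LM's linear layers for CPU-only edge inference without retraining:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint -- substitute the small LM you want to deploy.
MODEL_ID = "your-org/small-lm"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

# Dynamic INT8 quantization: weights of the linear layers are stored as
# int8 and dequantized on the fly at inference time, cutting the memory
# footprint on CPU-only edge targets with no fine-tuning required.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Quick smoke test of the quantized model.
inputs = tokenizer("Summarize the patient's symptoms:", return_tensors="pt")
with torch.no_grad():
    out = quantized.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Dynamic quantization is only one point in the design space the abstract alludes to; static or quantization-aware approaches trade more engineering effort for better accuracy retention on aggressive bit widths.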

@article{aralimatti2025_2503.01933,
  title={Fine-Tuning Small Language Models for Domain-Specific AI: An Edge AI Perspective},
  author={Rakshit Aralimatti and Syed Abdul Gaffar Shakhadri and Kruthika KR and Kartik Basavaraj Angadi},
  journal={arXiv preprint arXiv:2503.01933},
  year={2025}
}