Fine-tuning LLaMA 2 interference: a comparative study of language implementations for optimal efficiency

30 January 2025
Sazzad Hossain
Touhidul Alam Seyam
Avijit Chowdhury
Munis Xamidov
Rajib Ghose
Abhijit Pathak
Abstract

This paper presents a comparative study aimed at optimizing Llama2 inference, a critical aspect of machine learning and natural language processing (NLP). We evaluate various programming languages and frameworks, including TensorFlow, PyTorch, Python, Mojo, C++, and Java, analyzing their performance in terms of speed, memory consumption, and ease of implementation through extensive benchmarking. Strengths and limitations of each approach are highlighted, along with proposed optimization strategies for parallel processing and hardware utilization. Furthermore, we investigate the Mojo SDK, a novel framework designed for large language model (LLM) inference on Apple Silicon, benchmarking its performance against implementations in C, C++, Rust, Zig, Go, and Julia. Our experiments, conducted on an Apple M1 Max, demonstrate Mojo SDK's competitive performance, ease of use, and seamless Python compatibility, positioning it as a strong alternative for LLM inference on Apple Silicon. We also discuss broader implications for LLM deployment on resource-constrained hardware and identify potential directions for future research.
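The benchmarking methodology sketched in the abstract (per-implementation measurements of speed and memory consumption) can be illustrated with a minimal harness. The sketch below is not taken from the paper: benchmark_inference and the generate callable are hypothetical stand-ins for whatever Python-accessible wrapper an implementation under test exposes (e.g. a PyTorch, llama2.c, or Mojo binding). It records mean wall-clock latency with time.perf_counter and Python-level peak memory with tracemalloc; native-heap memory of a C/C++/Rust backend would need an external tool.

import time
import tracemalloc

def benchmark_inference(generate, prompt, n_runs=5):
    """Time a text-generation callable and record peak Python-level memory.

    `generate` is a placeholder for any implementation under test,
    exposed to Python; `prompt` is the input text for each run.
    """
    latencies = []
    tracemalloc.start()
    for _ in range(n_runs):
        start = time.perf_counter()
        output = generate(prompt)              # one full generation pass
        latencies.append(time.perf_counter() - start)
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {
        "mean_latency_s": sum(latencies) / len(latencies),
        "peak_mem_mb": peak_bytes / 1e6,       # Python allocations only
        "output_words": len(output.split()),   # crude throughput proxy
    }

Running the same harness against each backend on identical prompts gives directly comparable latency and memory figures, which is the kind of head-to-head comparison the abstract describes for the Apple M1 Max experiments.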

View on arXiv
@article{hossain2025_2502.01651,
  title={Fine-tuning LLaMA 2 interference: a comparative study of language implementations for optimal efficiency},
  author={Sazzad Hossain and Touhidul Alam Seyam and Avijit Chowdhury and Munis Xamidov and Rajib Ghose and Abhijit Pathak},
  journal={arXiv preprint arXiv:2502.01651},
  year={2025}
}