This paper presents a comparative study aimed at optimizing LLaMA 2 inference, a critical aspect of machine learning and natural language processing (NLP). We evaluate implementations across frameworks (TensorFlow, PyTorch) and programming languages (Python, Mojo, C++, and Java), analyzing their performance in terms of speed, memory consumption, and ease of implementation through extensive benchmarking. We highlight the strengths and limitations of each approach and propose optimization strategies for parallel processing and hardware utilization. Furthermore, we investigate the Mojo SDK, a novel framework designed for large language model (LLM) inference on Apple Silicon, benchmarking its performance against implementations in C, C++, Rust, Zig, Go, and Julia. Our experiments, conducted on an Apple M1 Max, demonstrate the Mojo SDK's competitive performance, ease of use, and seamless interoperability with Python, positioning it as a strong alternative for LLM inference on Apple Silicon. We also discuss the broader implications for LLM deployment on resource-constrained hardware and identify directions for future research.
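For illustration only, the sketch below shows one way such a speed and memory benchmark can be structured: a small Python driver that invokes a llama2.c-style inference binary, records wall-clock latency and approximate tokens per second, and reads the peak resident set size of the child process. The binary name, model file, and command-line flags are hypothetical placeholders, not the authors' actual harness.

# Minimal benchmark sketch (not the paper's harness): times a llama2.c-style
# CLI binary and reports wall-clock latency, rough tokens/second, and peak RSS.
# The binary name, model path, and flags are hypothetical placeholders.
import resource
import subprocess
import time

BINARY = "./run"          # hypothetical inference binary (e.g., a llama2.c build)
MODEL = "stories15M.bin"  # hypothetical model checkpoint
NUM_TOKENS = 256          # tokens to generate per run
RUNS = 5                  # repeat to average out noise

def bench_once() -> tuple[float, int]:
    """Run the binary once; return (elapsed seconds, peak child RSS).

    Note: ru_maxrss is reported in bytes on macOS and KiB on Linux.
    """
    start = time.perf_counter()
    subprocess.run([BINARY, MODEL, "-n", str(NUM_TOKENS)],
                   check=True, capture_output=True)
    elapsed = time.perf_counter() - start
    peak = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    return elapsed, peak

if __name__ == "__main__":
    results = [bench_once() for _ in range(RUNS)]
    best = min(t for t, _ in results)
    print(f"best wall clock:     {best:.2f} s")
    print(f"approx. throughput:  {NUM_TOKENS / best:.1f} tok/s")
    print(f"peak child RSS:      {max(m for _, m in results)}")

The same driver can be pointed at each language's binary in turn, so that every implementation is measured under identical prompt length, token count, and hardware conditions.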
@article{hossain2025_2502.01651,
  title   = {Fine-tuning LLaMA 2 interference: a comparative study of language implementations for optimal efficiency},
  author  = {Sazzad Hossain and Touhidul Alam Seyam and Avijit Chowdhury and Munis Xamidov and Rajib Ghose and Abhijit Pathak},
  journal = {arXiv preprint arXiv:2502.01651},
  year    = {2025}
}