
MNN-LLM: A Generic Inference Engine for Fast Large Language Model Deployment on Mobile Devices

Main: 6 pages, 5 figures, 3 tables; bibliography: 1 page
Abstract

Large language models (LLMs) have demonstrated exceptional performance across a variety of tasks. However, their substantial scale leads to significant computational resource consumption during inference, resulting in high costs. Consequently, inference on edge devices presents a promising alternative. The primary challenges of edge inference are memory usage and inference speed. This paper introduces MNN-LLM, a framework specifically designed to accelerate the deployment of large language models on mobile devices. MNN-LLM addresses the runtime characteristics of LLMs through model quantization and DRAM-Flash hybrid storage, effectively reducing memory usage. It rearranges weights and inputs to match mobile CPU instruction sets and GPU characteristics, and employs strategies such as multicore load balancing, mixed-precision floating-point computation, and geometric computation to enhance performance. Notably, MNN-LLM achieves up to an 8.6x speedup over current mainstream LLM-specific frameworks.
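The quantization the abstract refers to can be made concrete with a small example. The following C++ is a minimal sketch of symmetric per-channel int8 weight quantization, assuming row-major weights with one scale per output channel; the QuantizedWeights struct and quantizeWeights function are hypothetical illustrations, not MNN-LLM's actual API, which uses its own weight layouts and also supports lower bit widths.

// Minimal sketch of symmetric per-channel int8 weight quantization,
// one of the memory-reduction techniques the abstract names.
// Illustration only; not MNN-LLM's actual quantization code.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct QuantizedWeights {
    std::vector<int8_t> data;   // quantized values, row-major [rows x cols]
    std::vector<float>  scales; // one scale per output channel (row)
};

QuantizedWeights quantizeWeights(const std::vector<float>& w, int rows, int cols) {
    QuantizedWeights q;
    q.data.resize(static_cast<size_t>(rows) * cols);
    q.scales.resize(rows);
    for (int r = 0; r < rows; ++r) {
        // Scale each row so its largest magnitude maps to 127.
        float maxAbs = 0.f;
        for (int c = 0; c < cols; ++c)
            maxAbs = std::max(maxAbs, std::fabs(w[r * cols + c]));
        float scale = maxAbs > 0.f ? maxAbs / 127.f : 1.f;
        q.scales[r] = scale;
        for (int c = 0; c < cols; ++c) {
            int v = static_cast<int>(std::lround(w[r * cols + c] / scale));
            q.data[r * cols + c] = static_cast<int8_t>(std::clamp(v, -127, 127));
        }
    }
    return q;
}

Storing weights as int8 with a single float scale per output channel cuts weight memory roughly 4x relative to fp32 (2x relative to fp16); at inference time each value is recovered approximately as data * scale for its row.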

@article{wang2025_2506.10443,
  title={MNN-LLM: A Generic Inference Engine for Fast Large Language Model Deployment on Mobile Devices},
  author={Zhaode Wang and Jingbang Yang and Xinyu Qian and Shiwen Xing and Xiaotang Jiang and Chengfei Lv and Shengyu Zhang},
  journal={arXiv preprint arXiv:2506.10443},
  year={2025}
}