MLPerf Mobile Inference Benchmark

3 December 2020
Vijay Janapa Reddi
David Kanter
Peter Mattson
Jared Duke
Thai Nguyen
Ramesh Chukka
Kenneth Shiring
Koan-Sin Tan
M. Charlebois
William Chou
Mostafa El-Khamy
Jungwook Hong
T. S. John
Cindy Trinh
Michael H. C. Buch
Mark Mazumder
Relja Markovic
Thomas Atta-fosu
Fatih Çakir
Masoud Charkhabi
Xiaodong Chen
Cheng-Ming Chiang
Dave Dexter
Terry Heo
Abstract

MLPerf Mobile is the first industry-standard open-source mobile benchmark developed by industry members and academic researchers to allow performance/accuracy evaluation of mobile devices with different AI chips and software stacks. The benchmark draws from the expertise of leading mobile-SoC vendors, ML-framework providers, and model producers. In this paper, we motivate the drive to demystify mobile-AI performance and present MLPerf Mobile's design considerations, architecture, and implementation. The benchmark comprises a suite of models that operate under standard data sets, quality metrics, and run rules. For the first iteration, we developed an app to provide an "out-of-the-box" inference-performance benchmark for computer vision and natural-language processing on mobile devices. MLPerf Mobile can serve as a framework for integrating future models, for customizing quality-target thresholds to evaluate system performance, for comparing software frameworks, and for assessing heterogeneous-hardware capabilities for machine learning, all fairly and faithfully with fully reproducible results.
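The core idea the abstract describes, measuring inference latency while gating validity on a quality-target threshold, can be sketched in a few lines. This is a minimal illustration, not MLPerf's actual LoadGen harness; the `run_benchmark` function, the toy model, and the field names are all hypothetical stand-ins.

```python
import time
import statistics

def run_benchmark(model, samples, labels, quality_target, num_runs=100):
    """Measure per-inference latency and accuracy over repeated runs.
    Mirrors the MLPerf principle that a performance result is only
    valid if the model also meets its accuracy (quality) target."""
    latencies = []
    correct = 0
    for i in range(num_runs):
        sample = samples[i % len(samples)]
        label = labels[i % len(samples)]
        start = time.perf_counter()
        prediction = model(sample)          # run one inference
        latencies.append(time.perf_counter() - start)
        correct += int(prediction == label)
    accuracy = correct / num_runs
    ordered = sorted(latencies)
    return {
        "accuracy": accuracy,
        "valid": accuracy >= quality_target,  # quality gate on the result
        "latency_p50_ms": statistics.median(latencies) * 1e3,
        "latency_p90_ms": ordered[int(0.9 * num_runs)] * 1e3,
    }

# Toy stand-in for a real vision/NLP model: predicts the input's parity.
model = lambda x: x % 2
samples = list(range(32))
labels = [x % 2 for x in samples]
result = run_benchmark(model, samples, labels, quality_target=0.98)
```

A real mobile harness would additionally fix the run rules (warm-up, run duration, thermal conditions) so that results from different AI chips and software stacks are comparable.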
