
arXiv:2106.07597
MLPerf Tiny Benchmark

14 June 2021
Colby R. Banbury
Vijay Janapa Reddi
P. Torelli
J. Holleman
Nat Jeffries
C. Király
Pietro Montino
David Kanter
S. Ahmed
Danilo Pau
Urmish Thakker
Antonio Torrini
Pete Warden
Jay Cordaro
G. D. Guglielmo
Javier Mauricio Duarte
Stephen Gibellini
Videet Parekh
Honson Tran
Nhan Tran
Niu Wenxu
Xu Xuesong
Abstract

Advancements in ultra-low-power tiny machine learning (TinyML) systems promise to unlock an entirely new class of smart applications. However, continued progress is limited by the lack of a widely accepted and easily reproducible benchmark for these systems. To meet this need, we present MLPerf Tiny, the first industry-standard benchmark suite for ultra-low-power tiny machine learning systems. The benchmark suite is the collaborative effort of more than 50 organizations from industry and academia and reflects the needs of the community. MLPerf Tiny measures the accuracy, latency, and energy of machine learning inference to properly evaluate the tradeoffs between systems. Additionally, MLPerf Tiny implements a modular design that enables benchmark submitters to show the benefits of their product, regardless of where it falls on the ML deployment stack, in a fair and reproducible manner. The suite features four benchmarks: keyword spotting, visual wake words, image classification, and anomaly detection.
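The suite's structure described above (four tasks, each scored on accuracy, latency, and energy) can be sketched as a simple result record. The field names, units, and numbers below are illustrative assumptions for this sketch, not the official MLPerf Tiny submission schema:

```python
from dataclasses import dataclass

# The four MLPerf Tiny benchmark tasks named in the abstract.
TASKS = [
    "keyword_spotting",
    "visual_wake_words",
    "image_classification",
    "anomaly_detection",
]

@dataclass
class TinyResult:
    """One submission result for a single task (illustrative fields,
    not the official MLPerf Tiny schema)."""
    task: str
    quality: float      # task quality metric, e.g. top-1 accuracy or AUC
    latency_ms: float   # single-inference latency in milliseconds
    energy_uj: float    # energy per inference in microjoules

def total_energy_uj(results: list[TinyResult]) -> float:
    """Sum per-inference energy across tasks, e.g. to compare
    full-suite submissions on the optional energy metric."""
    return sum(r.energy_uj for r in results)

# Example submission with made-up numbers:
results = [
    TinyResult("keyword_spotting", 0.90, 120.0, 450.0),
    TinyResult("visual_wake_words", 0.80, 200.0, 700.0),
]
```

Reporting latency and energy separately from quality is what lets the benchmark expose the tradeoffs between systems that the abstract mentions, rather than collapsing them into a single score.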
