SqueezeNext: Hardware-Aware Neural Network Design

23 March 2018
A. Gholami
K. Kwon
Bichen Wu
Zizheng Tai
Xiangyu Yue
Peter H. Jin
Sicheng Zhao
Kurt Keutzer
Abstract

One of the main barriers to deploying neural networks on embedded systems has been the large memory and power consumption of existing neural networks. In this work, we introduce SqueezeNext, a new family of neural network architectures whose design was guided by consideration of previous architectures such as SqueezeNet, as well as by simulation results on a neural network accelerator. This new network matches AlexNet's accuracy on the ImageNet benchmark with 112× fewer parameters, and one of its deeper variants achieves VGG-19 accuracy with only 4.4 million parameters (31× smaller than VGG-19). SqueezeNext also achieves better top-5 classification accuracy with 1.3× fewer parameters compared to MobileNet, while avoiding the depthwise-separable convolutions that are inefficient on some mobile processor platforms. This wide range of accuracy lets users make speed-accuracy tradeoffs depending on the resources available on the target hardware. Hardware simulation results for power and inference speed on an embedded system guided us to design variations of the baseline model that are 2.59×/8.26× faster and 2.25×/7.5× more energy efficient than SqueezeNet/AlexNet, with no accuracy degradation.
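As a rough illustration of where savings like these come from, the sketch below compares the weight count of a standard 3×3 convolution, a MobileNet-style depthwise-separable 3×3 convolution, and a SqueezeNext-style factorized block (two 1×1 squeeze layers followed by 3×1 and 1×3 convolutions and a 1×1 expansion). The specific channel ratios are assumptions chosen for illustration, not figures taken from this abstract.

```python
# Parameter-count comparison (biases ignored). Channel ratios in the
# SqueezeNext-style block are illustrative assumptions, not exact
# numbers from the abstract.

def conv_params(c_in, c_out, kh, kw):
    """Weights in a dense convolution with a kh x kw kernel."""
    return c_in * c_out * kh * kw

def standard_3x3(c):
    # Plain 3x3 convolution keeping the channel count fixed.
    return conv_params(c, c, 3, 3)

def depthwise_separable_3x3(c):
    # MobileNet-style: 3x3 depthwise (one filter per channel) + 1x1 pointwise.
    return c * 3 * 3 + conv_params(c, c, 1, 1)

def squeezenext_block(c):
    # Assumed layout: 1x1 squeeze c -> c/2, 1x1 squeeze c/2 -> c/4,
    # 3x1 conv c/4 -> c/2, 1x3 conv c/2 -> c/2, 1x1 expand c/2 -> c.
    return (conv_params(c, c // 2, 1, 1)
            + conv_params(c // 2, c // 4, 1, 1)
            + conv_params(c // 4, c // 2, 3, 1)
            + conv_params(c // 2, c // 2, 1, 3)
            + conv_params(c // 2, c, 1, 1))

if __name__ == "__main__":
    c = 128
    print("standard 3x3:       ", standard_3x3(c))             # 147456
    print("depthwise separable:", depthwise_separable_3x3(c))  # 17536
    print("squeezenext block:  ", squeezenext_block(c))        # 36864
```

At 128 channels the factorized block needs roughly 4× fewer weights than the dense 3×3 convolution while using only standard (non-depthwise) convolutions, which is the tradeoff the abstract highlights against MobileNet.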
