Layer-Wise Data-Free CNN Compression

18 November 2020
Maxwell Horton, Yanzi Jin, Ali Farhadi, Mohammad Rastegari
Abstract

We present a computationally efficient method for compressing a trained neural network without using real data. We break the problem of data-free network compression into independent layer-wise compressions. We show how to efficiently generate layer-wise training data using only a pretrained network. We use this data to perform independent layer-wise compressions on the pretrained network. We also show how to precondition the network to improve the accuracy of our layer-wise compression method. We present results for layer-wise compression using quantization and pruning. When quantizing, we compress with higher accuracy than related works while using orders of magnitude less compute. When compressing MobileNetV2 and evaluating on ImageNet, our method outperforms existing methods for quantization at all bit-widths, achieving a +0.34% improvement in 8-bit quantization, and a stronger improvement at lower bit-widths (up to a +28.50% improvement at 5 bits). When pruning, we outperform baselines of a similar compute envelope, achieving 1.5 times the sparsity rate at the same accuracy. We also show how to combine our efficient method with high-compute generative methods to improve upon their results.
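
To make the layer-wise, data-free idea concrete, here is a minimal PyTorch sketch. It is not the authors' implementation: the helper names (fake_quantize, compress_layer_wise) are hypothetical, plain Gaussian noise stands in for the paper's generated layer-wise training data, and a simple per-channel bias correction stands in for the paper's preconditioning step. Each conv/linear layer is quantized independently against the full-precision network's own activations, which is the core layer-wise structure the abstract describes.

import copy
import torch
import torch.nn as nn

def fake_quantize(w: torch.Tensor, num_bits: int) -> torch.Tensor:
    # Uniform symmetric fake quantization; scale set by the max absolute weight.
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale

@torch.no_grad()
def compress_layer_wise(model: nn.Sequential, num_bits: int = 8,
                        num_samples: int = 64,
                        input_shape=(3, 224, 224)) -> nn.Sequential:
    """Quantize each conv/linear layer independently, using activations
    produced by the pretrained full-precision model as layer-wise data.
    (Sketch only: the paper generates better-matched synthetic data.)"""
    model = model.eval()
    compressed = copy.deepcopy(model)
    # Synthetic network inputs: Gaussian noise as a simplifying assumption.
    x = torch.randn(num_samples, *input_shape)
    for fp_layer, q_layer in zip(model, compressed):
        layer_input = x          # input this layer sees in the FP network
        x = fp_layer(x)          # full-precision forward pass, layer by layer
        if isinstance(fp_layer, (nn.Conv2d, nn.Linear)):
            q_layer.weight.copy_(fake_quantize(fp_layer.weight, num_bits))
            # Bias correction: match the quantized layer's mean output
            # (per output channel) to the full-precision layer's.
            err = x - q_layer(layer_input)
            reduce_dims = [d for d in range(err.dim()) if d != 1]
            if q_layer.bias is not None:
                q_layer.bias += err.mean(dim=reduce_dims)
    return compressed

# Example usage on a toy network, quantizing to 5 bits:
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
qnet = compress_layer_wise(net, num_bits=5, num_samples=8, input_shape=(3, 32, 32))

Because every layer is compressed against the full-precision network's activations, the layers are independent and can be processed in parallel, which is what keeps the compute cost low. A pruning variant of the same loop would zero out small-magnitude weights per layer instead of calling fake_quantize.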

arXiv:2011.09058