
Nonconvex Regularization for Network Slimming: Compressing CNNs Even More

3 October 2020
Kevin Bui
Fredrick Park
Shuai Zhang
Y. Qi
Jack Xin
arXiv: 2010.01242
Abstract

In the last decade, convolutional neural networks (CNNs) have evolved to become the dominant models for various computer vision tasks, but they cannot be deployed on low-memory devices due to their high memory requirements and computational cost. One popular, straightforward approach to compressing CNNs is network slimming, which imposes an $\ell_1$ penalty on the channel-associated scaling factors in the batch normalization layers during training. In this way, channels with low scaling factors are identified as insignificant and are pruned from the models. In this paper, we propose replacing the $\ell_1$ penalty with the $\ell_p$ and transformed $\ell_1$ (T$\ell_1$) penalties, since these nonconvex penalties outperformed $\ell_1$ in yielding sparser, satisfactory solutions in various compressed sensing problems. In our numerical experiments, we demonstrate network slimming with $\ell_p$ and T$\ell_1$ penalties on VGGNet and DenseNet trained on CIFAR-10/100. The results demonstrate that the nonconvex penalties compress CNNs better than $\ell_1$. In addition, T$\ell_1$ preserves the model accuracy after channel pruning, and $\ell_{1/2}$ and $\ell_{3/4}$ yield compressed models with accuracies similar to $\ell_1$ after retraining.
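As a rough illustration of the regularization described above, the sketch below adds a sparsity penalty on the batch-normalization scaling factors to the training loss, in the spirit of network slimming, with the transformed $\ell_1$ form $(a+1)|x|/(a+|x|)$ standing in for the usual $\ell_1$ term. This is a minimal sketch under those assumptions; the helper names, the choice of $a$, and the PyTorch phrasing are illustrative and not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): penalize the batch-normalization
# scaling factors (gamma) during training so that low-magnitude channels can be
# identified and pruned afterwards, using a transformed-l1 penalty.
import torch
import torch.nn as nn

def transformed_l1(x: torch.Tensor, a: float = 1.0) -> torch.Tensor:
    # Transformed l1 penalty: (a + 1)|x| / (a + |x|), summed elementwise.
    x = x.abs()
    return ((a + 1.0) * x / (a + x)).sum()

def bn_sparsity_penalty(model: nn.Module, penalty=transformed_l1) -> torch.Tensor:
    # Sum the penalty over every BatchNorm scaling factor in the model.
    total = torch.zeros((), dtype=torch.float32)
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            total = total + penalty(m.weight)
    return total

# Usage inside a training step, with lambda_reg the regularization strength:
#   loss = criterion(model(images), labels) + lambda_reg * bn_sparsity_penalty(model)
#   loss.backward(); optimizer.step()
```

After training, channels whose scaling factors fall below a chosen threshold would be pruned, and the slimmed network retrained, as in the standard network-slimming pipeline.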
