
arXiv:2002.07520
Gradient ℓ1 Regularization for Quantization Robustness

18 February 2020
Milad Alizadeh
Arash Behboodi
M. V. Baalen
Christos Louizos
Tijmen Blankevoort
Max Welling
Abstract

We analyze the effect of quantizing the weights and activations of neural networks on their loss, and derive a simple regularization scheme that improves robustness against post-training quantization. By training quantization-ready networks, our approach enables storing a single set of weights that can be quantized on demand to different bit-widths as the energy and memory requirements of the application change. Unlike quantization-aware training with the straight-through estimator, which targets only a specific bit-width and requires access to the training data and pipeline, our regularization-based method paves the way for "on the fly" post-training quantization to various bit-widths. We show that by modeling quantization as an ℓ∞-bounded perturbation, the first-order term in the loss expansion can be regularized using the ℓ1-norm of the gradients. We experimentally validate the effectiveness of our regularization scheme on different architectures on the CIFAR-10 and ImageNet datasets.
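The core idea above can be sketched in a few lines of PyTorch: penalize the ℓ1-norm of the loss gradient with respect to the weights, which bounds the first-order effect of an ℓ∞-bounded quantization perturbation. This is a minimal illustration, not the paper's implementation; the toy model, random data, and the penalty weight `lam` are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

def gradient_l1_penalty(loss, params):
    # Under an l_inf-bounded weight perturbation (quantization noise),
    # the first-order term of the loss expansion is bounded by the
    # l1-norm of the gradient, so we penalize sum_i |dL/dw_i|.
    # create_graph=True keeps the graph so we can backprop through
    # this penalty (double backward).
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return sum(g.abs().sum() for g in grads)

# Toy model and data (illustrative only, not from the paper).
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
x = torch.randn(16, 4)
y = torch.randint(0, 2, (16,))
lam = 0.05  # regularization strength (an assumed hyperparameter)

opt = torch.optim.SGD(model.parameters(), lr=0.1)
task_loss = nn.functional.cross_entropy(model(x), y)
penalty = gradient_l1_penalty(task_loss, list(model.parameters()))
total = task_loss + lam * penalty
opt.zero_grad()
total.backward()
opt.step()
```

Because the penalty itself depends on gradients, each training step needs a second backward pass, which roughly doubles the cost of a step; the payoff is that the resulting weights can later be quantized to different bit-widths without retraining.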
