ResearchTrend.AI

A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off

3 June 2019
Yaniv Blumenfeld
D. Gilboa
Daniel Soudry
Abstract

Reducing the precision of weights and activation functions in neural network training, with minimal impact on performance, is essential for the deployment of these models in resource-constrained environments. We apply mean-field techniques to networks with quantized activations in order to evaluate the degree to which quantization degrades signal propagation at initialization. We derive initialization schemes which maximize signal propagation in such networks and suggest why this is helpful for generalization. Building on these results, we obtain a closed-form implicit equation for $L_{\max}$, the maximal trainable depth (and hence model capacity), given $N$, the number of quantization levels in the activation function. Solving this equation numerically, we obtain asymptotically: $L_{\max} \propto N^{1.82}$.
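As a rough illustration of the mean-field notion of signal propagation (not the paper's code; the function names, the choice of a uniformly quantized clipped-identity activation, and all parameter values are assumptions for this sketch), one can track how the correlation between two inputs evolves through a deep random network with a quantized activation:

```python
import numpy as np

def quantize_act(x, n_levels=4, clip=1.0):
    # Illustrative quantized activation: clip to [-clip, clip], then
    # round to n_levels uniformly spaced values (a stand-in for the
    # low-precision activations studied in the paper).
    x = np.clip(x, -clip, clip)
    step = 2.0 * clip / (n_levels - 1)
    return np.round(x / step) * step

def signal_corr_decay(depth=50, width=1000, n_levels=4,
                      sigma_w=1.5, rho0=0.9, seed=0):
    # Propagate two inputs with initial correlation rho0 through a
    # randomly initialized deep network and record, layer by layer,
    # the cosine similarity of their activations. How quickly this
    # decays (or saturates) is one numerical probe of the signal
    # propagation that the mean-field analysis characterizes.
    rng = np.random.default_rng(seed)
    x1 = rng.standard_normal(width)
    x2 = rho0 * x1 + np.sqrt(1.0 - rho0**2) * rng.standard_normal(width)
    corrs = []
    for _ in range(depth):
        # Weights scaled as sigma_w / sqrt(width), the usual
        # mean-field initialization scaling.
        W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
        x1 = quantize_act(W @ x1, n_levels)
        x2 = quantize_act(W @ x2, n_levels)
        c = np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))
        corrs.append(float(c))
    return corrs
```

Sweeping `n_levels` in such an experiment shows coarser quantization destroying correlation information in fewer layers, which is the qualitative trade-off the paper makes precise via its implicit equation for $L_{\max}$.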
