
Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks (arXiv:2101.04354)
12 January 2021
Karina Vasquez
Yeshwanth Venkatesha
Abhiroop Bhattacharjee
Abhishek Moitra
Priyadarshini Panda
Community: MQ

Papers citing "Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks"

2 of 2 papers shown
Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks
Kevin Lee Hunter
Lawrence Spracklen
Subutai Ahmad
27 Dec 2021
BMPQ: Bit-Gradient Sensitivity Driven Mixed-Precision Quantization of DNNs from Scratch
Souvik Kundu
Shikai Wang
Qirui Sun
P. Beerel
Massoud Pedram
Community: MQ
24 Dec 2021