BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization
arXiv:2002.03090, 8 February 2020
Miloš Nikolić
G. B. Hacene
Ciaran Bannon
Alberto Delmas Lascorz
Matthieu Courbariaux
Yoshua Bengio
Vincent Gripon
Andreas Moshovos
    MQ

Papers citing "BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization"

10 papers shown
AdaQAT: Adaptive Bit-Width Quantization-Aware Training
Cédric Gernigon
Silviu-Ioan Filip
Olivier Sentieys
Clément Coggiola
Mickael Bruno
22 Apr 2024
CNN-Based Equalization for Communications: Achieving Gigabit Throughput with a Flexible FPGA Hardware Architecture
Jonas Ney
C. Füllner
V. Lauinger
Laurent Schmalen
Sebastian Randel
Norbert Wehn
22 Apr 2024
Free Bits: Latency Optimization of Mixed-Precision Quantized Neural Networks on the Edge
Georg Rutishauser
Francesco Conti
Luca Benini
MQ
06 Jul 2023
Unsupervised ANN-Based Equalizer and Its Trainable FPGA Implementation
Jonas Ney
V. Lauinger
Laurent Schmalen
Norbert Wehn
14 Apr 2023
Efficient and Effective Methods for Mixed Precision Neural Network Quantization for Faster, Energy-efficient Inference
Deepika Bablani
J. McKinstry
S. K. Esser
R. Appuswamy
D. Modha
MQ
30 Jan 2023
FullPack: Full Vector Utilization for Sub-Byte Quantized Inference on General Purpose CPUs
Hossein Katebi
Navidreza Asadi
M. Goudarzi
MQ
13 Nov 2022
A Silicon Photonic Accelerator for Convolutional Neural Networks with Heterogeneous Quantization
Febin P. Sunny
Mahdi Nikdast
S. Pasricha
MQ
17 May 2022
Quantization and Deployment of Deep Neural Networks on Microcontrollers
Pierre-Emmanuel Novac
G. B. Hacene
Alain Pegatoquet
Benoit Miramond
Vincent Gripon
MQ
27 May 2021
ReLeQ: A Reinforcement Learning Approach for Deep Quantization of Neural Networks
Ahmed T. Elthakeb
Prannoy Pilligundla
Fatemehsadat Mireshghallah
Amir Yazdanbakhsh
H. Esmaeilzadeh
MQ
05 Nov 2018
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou
Anbang Yao
Yiwen Guo
Lin Xu
Yurong Chen
MQ
10 Feb 2017