ResearchTrend.AI
Relaxed Quantization for Discretized Neural Networks (arXiv:1810.01875)

3 October 2018
Christos Louizos, M. Reisser, Tijmen Blankevoort, E. Gavves, Max Welling
[MQ]

Papers citing "Relaxed Quantization for Discretized Neural Networks"

Showing 38 of 88 citing papers.

Single-path Bit Sharing for Automatic Loss-aware Model Compression [MQ] (13 Jan 2021)
  Jing Liu, Bohan Zhuang, Peng Chen, Chunhua Shen, Jianfei Cai, Mingkui Tan

Recurrence of Optimum for Training Weight and Activation Quantized Networks [MQ] (10 Dec 2020)
  Ziang Long, Penghang Yin, Jack Xin

Maximin Optimization for Binary Regression [MQ] (10 Oct 2020)
  Nisan Chiprut, Amir Globerson, A. Wiesel

One Weight Bitwidth to Rule Them All [MQ] (22 Aug 2020)
  Ting-Wu Chin, P. Chuang, Vikas Chandra, Diana Marculescu

FATNN: Fast and Accurate Ternary Neural Networks [MQ] (12 Aug 2020)
  Peng Chen, Bohan Zhuang, Chunhua Shen

Differentiable Joint Pruning and Quantization for Hardware Efficiency [MQ] (20 Jul 2020)
  Ying Wang, Yadong Lu, Tijmen Blankevoort

DBQ: A Differentiable Branch Quantizer for Lightweight Deep Neural Networks [MQ] (19 Jul 2020)
  Hassan Dbouk, Hetul Sanghvi, M. Mehendale, Naresh R Shanbhag

AUSN: Approximately Uniform Quantization by Adaptively Superimposing Non-uniform Distribution for Deep Neural Networks [MQ] (08 Jul 2020)
  Fangxin Liu, Wenbo Zhao, Yanzhi Wang, Changzhi Dai, Li Jiang

Accelerating Neural Network Inference by Overflow Aware Quantization [MQ] (27 May 2020)
  Hongwei Xie, Shuo Zhang, Huanghao Ding, Yafei Song, Baitao Shao, Conggang Hu, Lingyi Cai, Mingyang Li

Bayesian Bits: Unifying Quantization and Pruning [MQ] (14 May 2020)
  M. V. Baalen, Christos Louizos, Markus Nagel, Rana Ali Amjad, Ying Wang, Tijmen Blankevoort, Max Welling

Up or Down? Adaptive Rounding for Post-Training Quantization [MQ] (22 Apr 2020)
  Markus Nagel, Rana Ali Amjad, M. V. Baalen, Christos Louizos, Tijmen Blankevoort

A Data and Compute Efficient Design for Limited-Resources Deep Learning [MedIm] (21 Apr 2020)
  Mirgahney Mohamed, Gabriele Cesa, Taco S. Cohen, Max Welling

LSQ+: Improving low-bit quantization through learnable offsets and better initialization [MQ] (20 Apr 2020)
  Yash Bhalgat, Jinwon Lee, Markus Nagel, Tijmen Blankevoort, Nojun Kwak

Role-Wise Data Augmentation for Knowledge Distillation (19 Apr 2020)
  Jie Fu, Xue Geng, Zhijian Duan, Bohan Zhuang, Xingdi Yuan, Adam Trischler, Jie Lin, C. Pal, Hao Dong

Generative Low-bitwidth Data Free Quantization [MQ] (07 Mar 2020)
  Shoukai Xu, Haokun Li, Bohan Zhuang, Jing Liu, Jingyun Liang, Chuangrun Liang, Mingkui Tan

Propagating Asymptotic-Estimated Gradients for Low Bitwidth Quantized Neural Networks [MQ] (04 Mar 2020)
  Jun Chen, Yong Liu, Hao Zhang, Shengnan Hou, Jian Yang

Training Binary Neural Networks using the Bayesian Learning Rule [BDL, MQ] (25 Feb 2020)
  Xiangming Meng, Roman Bachmann, Mohammad Emtiyaz Khan

Post-training Quantization with Multiple Points: Mixed Precision without Mixed Precision [MQ] (20 Feb 2020)
  Xingchao Liu, Mao Ye, Dengyong Zhou, Qiang Liu

Gradient $\ell_1$ Regularization for Quantization Robustness [MQ] (18 Feb 2020)
  Milad Alizadeh, Arash Behboodi, M. V. Baalen, Christos Louizos, Tijmen Blankevoort, Max Welling

Automatic Pruning for Quantized Neural Networks [MQ] (03 Feb 2020)
  Luis Guerra, Bohan Zhuang, Ian Reid, Tom Drummond

Resource-Efficient Neural Networks for Embedded Systems (07 Jan 2020)
  Wolfgang Roth, Günther Schindler, Lukas Pfeifenberger, Robert Peharz, Sebastian Tschiatschek, Holger Fröning, Franz Pernkopf, Zoubin Ghahramani

Sparse Weight Activation Training (07 Jan 2020)
  Md Aamir Raihan, Tor M. Aamodt

Adaptive Loss-aware Quantization for Multi-bit Networks [MQ] (18 Dec 2019)
  Zhongnan Qu, Zimu Zhou, Yun Cheng, Lothar Thiele

Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bit-wise Regularization [MQ] (29 Nov 2019)
  J. H. Lee, Jihun Yun, Sung Ju Hwang, Eunho Yang

Ternary MobileNets via Per-Layer Hybrid Filter Banks [MQ] (04 Nov 2019)
  Dibakar Gope, Jesse G. Beu, Urmish Thakker, Matthew Mattina

Mirror Descent View for Neural Network Quantization [MQ] (18 Oct 2019)
  Thalaiyasingam Ajanthan, Kartik Gupta, Philip Torr, Richard I. Hartley, P. Dokania

AI Benchmark: All About Deep Learning on Smartphones in 2019 [ELM] (15 Oct 2019)
  Andrey D. Ignatov, Radu Timofte, Andrei Kulik, Seungsoo Yang, Ke Wang, Felix Baum, Max Wu, Lirong Xu, Luc Van Gool

Bit Efficient Quantization for Deep Neural Networks [MQ] (07 Oct 2019)
  Prateeth Nayak, David C. Zhang, S. Chai

QuaRL: Quantization for Fast and Environmentally Sustainable Reinforcement Learning [MQ] (02 Oct 2019)
  Srivatsan Krishnan, Maximilian Lam, Sharad Chitlangia, Zishen Wan, Gabriel Barth-Maron, Aleksandra Faust, Vijay Janapa Reddi

Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations [MQ] (10 Aug 2019)
  Bohan Zhuang, Jing Liu, Mingkui Tan, Lingqiao Liu, Ian Reid, Chunhua Shen

Scalable Model Compression by Entropy Penalized Reparameterization (15 Jun 2019)
  Deniz Oktay, Johannes Ballé, Saurabh Singh, Abhinav Shrivastava

Data-Free Quantization Through Weight Equalization and Bias Correction [MQ] (11 Jun 2019)
  Markus Nagel, M. V. Baalen, Tijmen Blankevoort, Max Welling

Instant Quantization of Neural Networks using Monte Carlo Methods [MQ] (29 May 2019)
  Gonçalo Mordido, Matthijs Van Keirsbilck, A. Keller

Mixed Precision DNNs: All you need is a good parametrization [MQ] (27 May 2019)
  Stefan Uhlich, Lukas Mauch, Fabien Cardinaux, K. Yoshiyama, Javier Alonso García, Stephen Tiedemann, Thomas Kemp, Akira Nakamura

Dream Distillation: A Data-Independent Model Compression Framework [DD] (17 May 2019)
  Kartikeya Bhardwaj, Naveen Suda, R. Marculescu

Training Quantized Neural Networks with a Full-precision Auxiliary Module [MQ] (27 Mar 2019)
  Bohan Zhuang, Lingqiao Liu, Mingkui Tan, Chunhua Shen, Ian Reid

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights [MQ] (10 Feb 2017)
  Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen

Variational Optimization [DRL] (18 Dec 2012)
  J. Staines, David Barber