ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks
Louis Leconte, S. Schechtman, Eric Moulines
arXiv: 2211.03741, 7 November 2022
Papers citing "AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks" (50 of 64 papers shown)
  • AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets. Zhaopeng Tu, Xinghao Chen, Pengju Ren, Yunhe Wang. 17 Aug 2022. [MQ]
  • Accurate Neural Training with 4-bit Matrix Multiplications at Standard Formats. Brian Chmiel, Ron Banner, Elad Hoffer, Hilla Ben Yaacov, Daniel Soudry. 19 Dec 2021. [MQ]
  • AdaSTE: An Adaptive Straight-Through Estimator to Train Binary Neural Networks. Huu Le, R. Høier, Che-Tsung Lin, Christopher Zach. 06 Dec 2021.
  • Bias-Variance Tradeoffs in Single-Sample Binary Gradient Estimators. Alexander Shekhovtsov. 07 Oct 2021. [MQ]
  • On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems. Michael Muehlebach, Michael I. Jordan. 17 Jul 2021.
  • The Bayesian Learning Rule. Mohammad Emtiyaz Khan, Håvard Rue. 09 Jul 2021. [BDL]
  • A White Paper on Neural Network Quantization. Markus Nagel, Marios Fournarakis, Rana Ali Amjad, Yelysei Bondarenko, M. V. Baalen, Tijmen Blankevoort. 15 Jun 2021. [MQ]
  • In-Hindsight Quantization Range Estimation for Quantized Training. Marios Fournarakis, Markus Nagel. 10 May 2021. [MQ]
  • Pruning and Quantization for Deep Neural Network Acceleration: A Survey. Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang. 24 Jan 2021. [MQ]
  • A Review of Recent Advances of Binary Neural Networks for Edge Computing. Wenyu Zhao, Teli Ma, Xuan Gong, Baochang Zhang, David Doermann. 24 Nov 2020. [MQ]
  • A Statistical Framework for Low-bitwidth Training of Deep Neural Networks. Jianfei Chen, Yujie Gai, Z. Yao, Michael W. Mahoney, Joseph E. Gonzalez. 27 Oct 2020. [MQ]
  • Rotated Binary Neural Network. Mingbao Lin, Rongrong Ji, Zi-Han Xu, Baochang Zhang, Yan Wang, Yongjian Wu, Feiyue Huang, Chia-Wen Lin. 28 Sep 2020. [MQ]
  • Reintroducing Straight-Through Estimators as Principled Methods for Stochastic Binary Networks. Alexander Shekhovtsov, Dmitry Molchanov. 11 Jun 2020. [MQ]
  • LSQ+: Improving low-bit quantization through learnable offsets and better initialization. Yash Bhalgat, Jinwon Lee, Markus Nagel, Tijmen Blankevoort, Nojun Kwak. 20 Apr 2020. [MQ]
  • Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training. Simon Wiedemann, Temesgen Mehari, Kevin Kepp, Wojciech Samek. 09 Apr 2020.
  • Binary Neural Networks: A Survey. Haotong Qin, Ruihao Gong, Xianglong Liu, Xiao Bai, Jingkuan Song, N. Sebe. 31 Mar 2020. [MQ]
  • Training Binary Neural Networks with Real-to-Binary Convolutions. Brais Martínez, Jing Yang, Adrian Bulat, Georgios Tzimiropoulos. 25 Mar 2020. [MQ]
  • ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions. Zechun Liu, Zhiqiang Shen, Marios Savvides, Kwang-Ting Cheng. 07 Mar 2020. [MQ]
  • Training Binary Neural Networks using the Bayesian Learning Rule. Xiangming Meng, Roman Bachmann, Mohammad Emtiyaz Khan. 25 Feb 2020. [BDL, MQ]
  • MeliusNet: Can Binary Neural Networks Achieve MobileNet-level Accuracy? Joseph Bethge, Christian Bartz, Haojin Yang, Ying-Cong Chen, Christoph Meinel. 16 Jan 2020. [MQ]
  • Mirror Descent View for Neural Network Quantization. Thalaiyasingam Ajanthan, Kartik Gupta, Philip Torr, Leonid Sigal, P. Dokania. 18 Oct 2019. [MQ]
  • Additive Powers-of-Two Quantization: An Efficient Non-uniform Discretization for Neural Networks. Yuhang Li, Xin Dong, Wei Wang. 28 Sep 2019. [MQ]
  • Regularizing Activation Distribution for Training Binarized Deep Networks. Ruizhou Ding, Ting-Wu Chin, Z. Liu, Diana Marculescu. 04 Apr 2019. [MQ]
  • Low-bit Quantization of Neural Networks for Efficient Inference. Yoni Choukroun, Eli Kravchik, Fan Yang, P. Kisilev. 18 Feb 2019. [MQ]
  • Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm. Charbel Sakr, Naresh R Shanbhag. 31 Dec 2018. [MQ]
  • Training Deep Neural Networks with 8-bit Floating Point Numbers. Naigang Wang, Jungwook Choi, D. Brand, Chia-Yu Chen, K. Gopalakrishnan. 19 Dec 2018. [MQ]
  • Proximal Mean-field for Neural Network Quantization. Thalaiyasingam Ajanthan, P. Dokania, Leonid Sigal, Philip Torr. 11 Dec 2018. [MQ]
  • ProxQuant: Quantized Neural Networks via Proximal Operators. Yu Bai, Yu Wang, Edo Liberty. 01 Oct 2018. [MQ]
  • Probabilistic Binary Neural Networks. Jorn W. T. Peters, Max Welling. 10 Sep 2018. [BDL, UQCV, MQ]
  • A Survey on Methods and Theories of Quantized Neural Networks. Yunhui Guo. 13 Aug 2018. [MQ]
  • Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm. Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wen Liu, K. Cheng. 01 Aug 2018. [MQ]
  • LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks. Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, G. Hua. 26 Jul 2018. [MQ]
  • Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN). Jungwook Choi, P. Chuang, Zhuo Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, K. Gopalakrishnan. 17 Jul 2018. [MQ]
  • SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks. Julian Faraone, Nicholas J. Fraser, Michaela Blott, Philip H. W. Leong. 01 Jul 2018. [MQ]
  • Scalable Methods for 8-bit Training of Neural Networks. Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry. 25 May 2018. [MQ]
  • Stochastic subgradient method converges on tame functions. Damek Davis, Dmitriy Drusvyatskiy, Sham Kakade, Jason D. Lee. 20 Apr 2018.
  • Loss-aware Weight Quantization of Deep Networks. Lu Hou, James T. Kwok. 23 Feb 2018. [MQ]
  • Training wide residual networks for deployment using a single bit for each weight. Mark D Mcdonnell. 23 Feb 2018. [MQ]
  • Model compression via distillation and quantization. A. Polino, Razvan Pascanu, Dan Alistarh. 15 Feb 2018. [MQ]
  • From Hashing to CNNs: Training BinaryWeight Networks via Hashing. Qinghao Hu, Peisong Wang, Jian Cheng. 08 Feb 2018. [MQ]
  • Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. Benoit Jacob, S. Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, Dmitry Kalenichenko. 15 Dec 2017. [MQ]
  • Adaptive Quantization for Deep Neural Network. Yiren Zhou, Seyed-Mohsen Moosavi-Dezfooli, Ngai-Man Cheung, P. Frossard. 04 Dec 2017. [MQ]
  • Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy. Asit K. Mishra, Debbie Marr. 15 Nov 2017. [FedML]
  • Minimum Energy Quantized Neural Networks. Bert Moons, Koen Goetschalckx, Nick Van Berckelaer, Marian Verhelst. 01 Nov 2017. [MQ]
  • WRPN: Wide Reduced-Precision Networks. Asit K. Mishra, Eriko Nurvitadhi, Jeffrey J. Cook, Debbie Marr. 04 Sep 2017. [MQ]
  • Extremely Low Bit Neural Network: Squeeze the Last Bit Out with ADMM. Cong Leng, Hao Li, Shenghuo Zhu, Rong Jin. 24 Jul 2017. [MQ]
  • Model compression as constrained optimization, with application to neural nets. Part II: quantization. M. A. Carreira-Perpiñán, Yerlan Idelbayev. 13 Jul 2017. [MQ]
  • Training Quantized Nets: A Deeper Understanding. Hao Li, Soham De, Zheng Xu, Christoph Studer, H. Samet, Tom Goldstein. 07 Jun 2017. [MQ]
  • The High-Dimensional Geometry of Binary Neural Networks. Alexander G. Anderson, C. P. Berg. 19 May 2017. [MQ]
  • Trained Ternary Quantization. Chenzhuo Zhu, Song Han, Huizi Mao, W. Dally. 04 Dec 2016. [MQ]