Focused Quantization for Sparse CNNs

7 March 2019
Yiren Zhao, Xitong Gao, Daniel Bates, Robert D. Mullins, Chengzhong Xu
MQ

Papers citing "Focused Quantization for Sparse CNNs"

13 papers
MinUn: Accurate ML Inference on Microcontrollers
Shikhar Jaiswal, R. Goli, Aayan Kumar, Vivek Seshadri, Rahul Sharma
29 Oct 2022
Training Deep Neural Networks with Joint Quantization and Pruning of Weights and Activations
Xinyu Zhang, Ian Colbert, Ken Kreutz-Delgado, Srinjoy Das
MQ
15 Oct 2021
Rapid Model Architecture Adaption for Meta-Learning
Yiren Zhao, Xitong Gao, Ilia Shumailov, Nicolò Fusi, Robert D. Mullins
10 Sep 2021
Dynamic Probabilistic Pruning: A general framework for hardware-constrained pruning at different granularities
L. Gonzalez-Carabarin, Iris A. M. Huijben, Bastian Veeling, A. Schmid, Ruud J. G. van Sloun
26 May 2021
Methods for Pruning Deep Neural Networks
S. Vadera, Salem Ameen
3DPC
31 Oct 2020
BAMSProd: A Step towards Generalizing the Adaptive Optimization Methods to Deep Binary Model
Junjie Liu, Dongchao Wen, Deyu Wang, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato
MQ
29 Sep 2020
Learned Low Precision Graph Neural Networks
Yiren Zhao, Duo Wang, Daniel Bates, Robert D. Mullins, M. Jamnik, Pietro Lió
GNN
19 Sep 2020
AUSN: Approximately Uniform Quantization by Adaptively Superimposing Non-uniform Distribution for Deep Neural Networks
Fangxin Liu, Wenbo Zhao, Yanzhi Wang, Changzhi Dai, Li Jiang
MQ
08 Jul 2020
ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting
Xiaohan Ding, Tianxiang Hao, Jianchao Tan, Ji Liu, Jungong Han, Yuchen Guo, Guiguang Ding
07 Jul 2020
Neural Network Activation Quantization with Bitwise Information Bottlenecks
Xichuan Zhou, Kui Liu, Cong Shi, Haijun Liu, Ji Liu
MQ
09 Jun 2020
LUTNet: Learning FPGA Configurations for Highly Efficient Neural Network Inference
Erwei Wang, James J. Davis, P. Cheung, George A. Constantinides
MQ
24 Oct 2019
Automatic Generation of Multi-precision Multi-arithmetic CNN Accelerators for FPGAs
Yiren Zhao, Xitong Gao, Xuan Guo, Junyi Liu, Erwei Wang, Robert D. Mullins, P. Cheung, George A. Constantinides, Chengzhong Xu
MQ
21 Oct 2019
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
MQ
10 Feb 2017