Minimum Variance Unbiased N:M Sparsity for the Neural Gradients

Brian Chmiel, Itay Hubara, Ron Banner, Daniel Soudry
21 March 2022

Papers citing "Minimum Variance Unbiased N:M Sparsity for the Neural Gradients" (25 papers shown)

  • Accelerating DNN Training with Structured Data Gradient Pruning (01 Feb 2022): Bradley McDanel, Helia Dinh, J. Magallanes
  • Sparse is Enough in Scaling Transformers [MoE] (24 Nov 2021): Sebastian Jaszczur, Aakanksha Chowdhery, Afroz Mohiuddin, Lukasz Kaiser, Wojciech Gajewski, Henryk Michalewski, Jonni Kanerva
  • NxMTransformer: Semi-Structured Sparsification for Natural Language Understanding via ADMM (28 Oct 2021): Connor Holmes, Minjia Zhang, Yuxiong He, Bo Wu
  • Search Spaces for Neural Model Training (27 May 2021): Darko Stosic, Dusan Stosic
  • Accelerating Sparse Deep Neural Networks (16 Apr 2021): Asit K. Mishra, J. Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, Paulius Micikevicius
  • Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks (16 Feb 2021): Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry
  • Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch (08 Feb 2021): Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, Hongsheng Li
  • Differentiable Joint Pruning and Quantization for Hardware Efficiency [MQ] (20 Jul 2020): Ying Wang, Yadong Lu, Tijmen Blankevoort
  • Neural gradients are near-lognormal: improved quantized and sparse training [MQ] (15 Jun 2020): Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry
  • Comparing Rewinding and Fine-tuning in Neural Network Pruning (05 Mar 2020): Alex Renda, Jonathan Frankle, Michael Carbin
  • Sparse Weight Activation Training (07 Jan 2020): Md Aamir Raihan, Tor M. Aamodt
  • Rigging the Lottery: Making All Tickets Winners (25 Nov 2019): Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, Erich Elsen
  • Loss Aware Post-training Quantization [MQ] (17 Nov 2019): Yury Nahshan, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, A. Bronstein, A. Mendelson
  • Learned Step Size Quantization [MQ] (21 Feb 2019): S. K. Esser, J. McKinstry, Deepika Bablani, R. Appuswamy, D. Modha
  • Rethinking the Value of Network Pruning (11 Oct 2018): Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell
  • Learning to Quantize Deep Networks by Optimizing Quantization Intervals with Task Loss [MQ] (17 Aug 2018): S. Jung, Changyong Son, Seohyung Lee, JinWoo Son, Youngjun Kwak, Jae-Joon Han, Sung Ju Hwang, Changkyu Choi
  • Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN) [MQ] (17 Jul 2018): Jungwook Choi, P. Chuang, Zhuo Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, K. Gopalakrishnan
  • Scalable Methods for 8-bit Training of Neural Networks [MQ] (25 May 2018): Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry
  • PACT: Parameterized Clipping Activation for Quantized Neural Networks [MQ] (16 May 2018): Jungwook Choi, Zhuo Wang, Swagath Venkataramani, P. Chuang, Vijayalakshmi Srinivasan, K. Gopalakrishnan
  • The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (09 Mar 2018): Jonathan Frankle, Michael Carbin
  • ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression (20 Jul 2017): Jian-Hao Luo, Jianxin Wu, Weiyao Lin
  • meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting (19 Jun 2017): Xu Sun, Xuancheng Ren, Shuming Ma, Houfeng Wang
  • Pruning Filters for Efficient ConvNets [3DPC] (31 Aug 2016): Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf
  • Learning Structured Sparsity in Deep Neural Networks (12 Aug 2016): W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
  • Optimization Methods for Large-Scale Machine Learning (15 Jun 2016): Léon Bottou, Frank E. Curtis, J. Nocedal