ResearchTrend.AI
Fast Sparse ConvNets
arXiv 1911.09723 · 21 November 2019
Erich Elsen, Marat Dukhan, Trevor Gale, Karen Simonyan

Papers citing "Fast Sparse ConvNets"

32 / 32 papers shown
Realizing Unaligned Block-wise Pruning for DNN Acceleration on Mobile Devices
Hayun Lee, Dongkun Shin
MQ | 26 | 0 | 0 | 29 Jul 2024

When Foresight Pruning Meets Zeroth-Order Optimization: Efficient Federated Learning for Low-Memory Devices
Peng Zhang, Yingjie Liu, Yingbo Zhou, Xiao Du, Xian Wei, Ting Wang, Mingsong Chen
FedML | 17 | 1 | 0 | 08 May 2024

MaxQ: Multi-Axis Query for N:M Sparsity Network
Jingyang Xiang, Siqi Li, Junhao Chen, Zhuangzhi Chen, Tianxin Huang, Linpeng Peng, Yong-Jin Liu
16 | 0 | 0 | 12 Dec 2023

Neural Networks at a Fraction with Pruned Quaternions
Sahel Mohammad Iqbal, Subhankar Mishra
28 | 4 | 0 | 13 Aug 2023

STen: Productive and Efficient Sparsity in PyTorch
Andrei Ivanov, Nikoli Dryden, Tal Ben-Nun, Saleh Ashkboos, Torsten Hoefler
32 | 4 | 0 | 15 Apr 2023

Accelerating and Compressing Deep Neural Networks for Massive MIMO CSI Feedback
O. Erak, H. Abou-zeid
47 | 5 | 0 | 20 Jan 2023

DRESS: Dynamic REal-time Sparse Subnets
Zhongnan Qu, Syed Shakib Sarwar, Xin Dong, Yuecheng Li, Huseyin Ekin Sumbul, B. D. Salvo
3DH | 16 | 1 | 0 | 01 Jul 2022

Playing Lottery Tickets in Style Transfer Models
Meihao Kong, Jing Huo, Wenbin Li, Jing Wu, Yu-Kun Lai, Yang Gao
17 | 1 | 0 | 25 Mar 2022

Shfl-BW: Accelerating Deep Neural Network Inference with Tensor-Core Aware Weight Pruning
Guyue Huang, Haoran Li, Minghai Qin, Fei Sun, Yufei Din, Yuan Xie
22 | 18 | 0 | 09 Mar 2022

Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance
Shiwei Liu, Yuesong Tian, Tianlong Chen, Li Shen
34 | 8 | 0 | 05 Mar 2022

Iterative Activation-based Structured Pruning
Kaiqi Zhao, Animesh Jain, Ming Zhao
34 | 0 | 0 | 22 Jan 2022

How Well Do Sparse ImageNet Models Transfer?
Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh
19 | 38 | 0 | 26 Nov 2021
Joint Channel and Weight Pruning for Model Acceleration on Mobile Devices
Tianli Zhao, Xi Sheryl Zhang, Wentao Zhu, Jiaxing Wang, Sen Yang, Ji Liu, Jian Cheng
43 | 2 | 0 | 15 Oct 2021
End-to-End Supermask Pruning: Learning to Prune Image Captioning Models
J. Tan, C. Chan, Joon Huang Chuah
VLM | 49 | 16 | 0 | 07 Oct 2021

Prioritized Subnet Sampling for Resource-Adaptive Supernet Training
Bohong Chen, Mingbao Lin, Rongrong Ji, Liujuan Cao
19 | 2 | 0 | 12 Sep 2021

Architecture Aware Latency Constrained Sparse Neural Networks
Tianli Zhao, Qinghao Hu, Xiangyu He, Weixiang Xu, Jiaxing Wang, Cong Leng, Jian Cheng
31 | 0 | 0 | 01 Sep 2021

Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, D. Mocanu
OOD | 28 | 49 | 0 | 28 Jun 2021

Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better
Gaurav Menghani
VLM, MedIm | 23 | 365 | 0 | 16 Jun 2021

1xN Pattern for Pruning Convolutional Neural Networks
Mingbao Lin, Yu-xin Zhang, Yuchao Li, Bohong Chen, Fei Chao, Mengdi Wang, Shen Li, Yonghong Tian, Rongrong Ji
3DPC | 31 | 40 | 0 | 31 May 2021

Post-training deep neural network pruning via layer-wise calibration
Ivan Lazarevich, Alexander Kozlov, Nikita Malinin
3DPC | 16 | 25 | 0 | 30 Apr 2021

Playing Lottery Tickets with Vision and Language
Zhe Gan, Yen-Chun Chen, Linjie Li, Tianlong Chen, Yu Cheng, Shuohang Wang, Jingjing Liu, Lijuan Wang, Zicheng Liu
VLM | 103 | 53 | 0 | 23 Apr 2021

Lottery Jackpots Exist in Pre-trained Models
Yu-xin Zhang, Mingbao Lin, Yan Wang, Fei Chao, Rongrong Ji
30 | 15 | 0 | 18 Apr 2021

Rethinking Network Pruning -- under the Pre-train and Fine-tune Paradigm
Dongkuan Xu, Ian En-Hsu Yen, Jinxi Zhao, Zhibin Xiao
VLM, AAML | 28 | 56 | 0 | 18 Apr 2021

An Information-Theoretic Justification for Model Pruning
Berivan Isik, Tsachy Weissman, Albert No
84 | 35 | 0 | 16 Feb 2021

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry
46 | 110 | 0 | 16 Feb 2021

Are wider nets better given the same number of parameters?
A. Golubeva, Behnam Neyshabur, Guy Gur-Ari
16 | 44 | 0 | 27 Oct 2020

The Hardware Lottery
Sara Hooker
25 | 202 | 0 | 14 Sep 2020

Graph Structure of Neural Networks
Jiaxuan You, J. Leskovec, Kaiming He, Saining Xie
GNN | 19 | 136 | 0 | 13 Jul 2020

Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights
Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li
46 | 81 | 0 | 02 Jul 2020

Sparse GPU Kernels for Deep Learning
Trevor Gale, Matei A. Zaharia, C. Young, Erich Elsen
10 | 227 | 0 | 18 Jun 2020

Progressive Skeletonization: Trimming more fat from a network at initialization
Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip H. S. Torr, Grégory Rogez, P. Dokania
21 | 95 | 0 | 16 Jun 2020

Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le
264 | 5,326 | 0 | 05 Nov 2016