Learning Structured Sparsity in Deep Neural Networks

12 August 2016
W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li

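For context on the cited work: Structured Sparsity Learning (SSL) regularizes groups of weights (e.g., whole filters or channels) with a group-Lasso penalty so that entire structures can be zeroed out and removed. The snippet below is a minimal illustrative sketch only, assuming PyTorch and hypothetical filter-wise and channel-wise groupings; it is not the authors' released code.

```python
# Illustrative sketch of a group-Lasso penalty over structural weight groups.
# Assumes PyTorch; the grouping (per-filter and per-channel) is one common choice.
import torch

def group_lasso_penalty(weight: torch.Tensor) -> torch.Tensor:
    """weight: conv weight of shape (out_channels, in_channels, kH, kW)."""
    # Filter-wise groups: one L2-norm per output filter, summed.
    filter_norms = weight.flatten(1).norm(p=2, dim=1).sum()
    # Channel-wise groups: one L2-norm per input channel, summed.
    channel_norms = weight.transpose(0, 1).flatten(1).norm(p=2, dim=1).sum()
    return filter_norms + channel_norms

# Example: add the penalty to a task loss with a small coefficient (hypothetical value).
conv = torch.nn.Conv2d(16, 32, kernel_size=3)
regularization = 1e-4 * group_lasso_penalty(conv.weight)
```

In this kind of scheme the penalty is added to the task loss; groups whose norms are driven to zero during training can then be pruned to obtain a smaller, hardware-friendly network.
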
Papers citing "Learning Structured Sparsity in Deep Neural Networks"

Showing 50 of 331 citing papers
EvoMoE: An Evolutional Mixture-of-Experts Training Framework via Dense-To-Sparse Gate
Xiaonan Nie
Xupeng Miao
Shijie Cao
Lingxiao Ma
Qibin Liu
Jilong Xue
Youshan Miao
Yi Liu
Zhi-Xin Yang
Bin Cui
MoMe
MoE
29
22
0
29 Dec 2021
A Multi-channel Training Method Boost the Performance
Yingdong Hu
19
1
0
27 Dec 2021
Compact Multi-level Sparse Neural Networks with Input Independent Dynamic Rerouting
Minghai Qin
Tianyun Zhang
Fei Sun
Yen-kuang Chen
M. Fardad
Yanzhi Wang
Yuan Xie
49
0
0
21 Dec 2021
Load-balanced Gather-scatter Patterns for Sparse Deep Neural Networks
Fei Sun
Minghai Qin
Tianyun Zhang
Xiaolong Ma
Haoran Li
Junwen Luo
Zihao Zhao
Yen-kuang Chen
Yuan Xie
19
1
0
20 Dec 2021
Training Structured Neural Networks Through Manifold Identification and Variance Reduction
Zih-Syuan Huang
Ching-pei Lee
AAML
48
9
0
05 Dec 2021
Object-aware Monocular Depth Prediction with Instance Convolutions
Enis Simsar
Evin Pınar Örnek
Fabian Manhardt
Helisa Dhamo
Nassir Navab
F. Tombari
3DH
MDE
36
1
0
02 Dec 2021
Morph Detection Enhanced by Structured Group Sparsity
Poorya Aghdaie
Baaria Chaudhary
Sobhan Soleymani
J. Dawson
Nasser M. Nasrabadi
CVBM
33
14
0
29 Nov 2021
Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition
Junhao Xu
Jianwei Yu
Shoukang Hu
Xunying Liu
Helen Meng
MQ
27
13
0
29 Nov 2021
Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration
Yifan Gong
Geng Yuan
Zheng Zhan
Wei Niu
Zhengang Li
...
Sijia Liu
Bin Ren
Xue Lin
Xulong Tang
Yanzhi Wang
28
10
0
22 Nov 2021
Self-Compression in Bayesian Neural Networks
Giuseppina Carannante
Dimah Dera
Ghulam Rasool
N. Bouaynaya
UQCV
BDL
36
5
0
10 Nov 2021
Efficient Neural Network Training via Forward and Backward Propagation Sparsification
Xiao Zhou
Weizhong Zhang
Zonghao Chen
Shizhe Diao
Tong Zhang
37
46
0
10 Nov 2021
Gabor filter incorporated CNN for compression
Akihiro Imamura
N. Arizumi
CVBM
28
2
0
29 Oct 2021
NxMTransformer: Semi-Structured Sparsification for Natural Language Understanding via ADMM
Connor Holmes
Minjia Zhang
Yuxiong He
Bo Wu
37
18
0
28 Oct 2021
MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
Geng Yuan
Xiaolong Ma
Wei Niu
Zhengang Li
Zhenglun Kong
...
Minghai Qin
Bin Ren
Yanzhi Wang
Sijia Liu
Xue Lin
23
89
0
26 Oct 2021
CHIP: CHannel Independence-based Pruning for Compact Neural Networks
Yang Sui
Miao Yin
Yi Xie
Huy Phan
S. Zonouz
Bo Yuan
VLM
33
129
0
26 Oct 2021
SMOF: Squeezing More Out of Filters Yields Hardware-Friendly CNN Pruning
Yanli Liu
Bochen Guan
Qinwen Xu
Weiyi Li
Shuxue Quan
33
2
0
21 Oct 2021
Joint Channel and Weight Pruning for Model Acceleration on Moblie Devices
Tianli Zhao
Xi Sheryl Zhang
Wentao Zhu
Jiaxing Wang
Sen Yang
Ji Liu
Jian Cheng
56
2
0
15 Oct 2021
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang
Meng Wang
Sijia Liu
Pin-Yu Chen
Jinjun Xiong
UQCV
MLT
31
13
0
12 Oct 2021
Haar Wavelet Feature Compression for Quantized Graph Convolutional Networks
Moshe Eliasof
Ben Bodner
Eran Treister
GNN
35
7
0
10 Oct 2021
One Timestep is All You Need: Training Spiking Neural Networks with Ultra Low Latency
Sayeed Shafayet Chowdhury
Nitin Rathi
Kaushik Roy
21
40
0
01 Oct 2021
Prune Your Model Before Distill It
Jinhyuk Park
Albert No
VLM
46
27
0
30 Sep 2021
Architecture Aware Latency Constrained Sparse Neural Networks
Tianli Zhao
Qinghao Hu
Xiangyu He
Weixiang Xu
Jiaxing Wang
Cong Leng
Jian Cheng
36
0
0
01 Sep 2021
NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks
Haekyu Park
Nilaksh Das
Rahul Duggal
Austin P. Wright
Omar Shaikh
Fred Hohman
Duen Horng Chau
HAI
19
25
0
29 Aug 2021
Layer-wise Model Pruning based on Mutual Information
Chun Fan
Jiwei Li
Xiang Ao
Fei Wu
Yuxian Meng
Xiaofei Sun
46
19
0
28 Aug 2021
Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search
Zheng Zhan
Yifan Gong
Pu Zhao
Geng Yuan
Wei Niu
...
Malith Jayaweera
David Kaeli
Bin Ren
Xue Lin
Yanzhi Wang
SupR
35
41
0
18 Aug 2021
Differentiable Subset Pruning of Transformer Heads
Jiaoda Li
Ryan Cotterell
Mrinmaya Sachan
45
53
0
10 Aug 2021
Group Fisher Pruning for Practical Network Compression
Liyang Liu
Shilong Zhang
Zhanghui Kuang
Aojun Zhou
Jingliang Xue
Xinjiang Wang
Yimin Chen
Wenming Yang
Q. Liao
Wayne Zhang
25
146
0
02 Aug 2021
Attribute Guided Sparse Tensor-Based Model for Person Re-Identification
Fariborz Taherkhani
Ali Dabouei
Sobhan Soleymani
J. Dawson
Nasser M. Nasrabadi
CVBM
38
2
0
29 Jul 2021
R-Drop: Regularized Dropout for Neural Networks
Xiaobo Liang
Lijun Wu
Juntao Li
Yue Wang
Qi Meng
Tao Qin
Wei Chen
Hao Fei
Tie-Yan Liu
47
424
0
28 Jun 2021
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Shiwei Liu
Tianlong Chen
Xiaohan Chen
Zahra Atashgahi
Lu Yin
Huanyu Kou
Li Shen
Mykola Pechenizkiy
Zhangyang Wang
Decebal Constantin Mocanu
40
111
0
19 Jun 2021
FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator
Geng Yuan
Payman Behnam
Zhengang Li
Ali Shafiee
Sheng Lin
...
Hang Liu
Xuehai Qian
M. N. Bojnordi
Yanzhi Wang
Caiwen Ding
24
68
0
16 Jun 2021
A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness
James Diffenderfer
Brian Bartoldson
Shreya Chaganti
Jize Zhang
B. Kailkhura
OOD
31
69
0
16 Jun 2021
Patch Slimming for Efficient Vision Transformers
Yehui Tang
Kai Han
Yunhe Wang
Chang Xu
Jianyuan Guo
Chao Xu
Dacheng Tao
ViT
24
163
0
05 Jun 2021
Dual-side Sparse Tensor Core
Yang-Feng Wang
Chen Zhang
Zhiqiang Xie
Cong Guo
Yunxin Liu
Jingwen Leng
25
74
0
20 May 2021
End-to-End Approach for Recognition of Historical Digit Strings
Mengqiao Zhao
A. G. Hochuli
A. Cheddad
30
1
0
28 Apr 2021
Spatio-Temporal Pruning and Quantization for Low-latency Spiking Neural Networks
Sayeed Shafayet Chowdhury
Isha Garg
Kaushik Roy
21
38
0
26 Apr 2021
"BNN - BN = ?": Training Binary Neural Networks without Batch
  Normalization
"BNN - BN = ?": Training Binary Neural Networks without Batch Normalization
Tianlong Chen
Zhenyu (Allen) Zhang
Xu Ouyang
Zechun Liu
Zhiqiang Shen
Zhangyang Wang
MQ
43
36
0
16 Apr 2021
Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
Mingzhe Chen
Deniz Gündüz
Kaibin Huang
Walid Saad
M. Bennis
Aneta Vulgarakis Feljan
H. Vincent Poor
38
401
0
05 Apr 2021
CDFI: Compression-Driven Network Design for Frame Interpolation
Tianyu Ding
Luming Liang
Zhihui Zhu
Ilya Zharkov
27
93
0
18 Mar 2021
unzipFPGA: Enhancing FPGA-based CNN Engines with On-the-Fly Weights Generation
Stylianos I. Venieris
Javier Fernandez-Marques
Nicholas D. Lane
24
11
0
09 Mar 2021
Knowledge Evolution in Neural Networks
Ahmed Taha
Abhinav Shrivastava
L. Davis
47
21
0
09 Mar 2021
BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization
Huanrui Yang
Lin Duan
Yiran Chen
Hai Helen Li
MQ
18
64
0
20 Feb 2021
Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not?
Ning Liu
Geng Yuan
Zhengping Che
Xuan Shen
Xiaolong Ma
Qing Jin
Jian Ren
Jian Tang
Sijia Liu
Yanzhi Wang
34
30
0
19 Feb 2021
Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
Itay Hubara
Brian Chmiel
Moshe Island
Ron Banner
S. Naor
Daniel Soudry
59
111
0
16 Feb 2021
Dancing along Battery: Enabling Transformer with Run-time Reconfigurability on Mobile Devices
Yuhong Song
Weiwen Jiang
Bingbing Li
Panjie Qi
Qingfeng Zhuge
E. Sha
Sakyasingha Dasgupta
Yiyu Shi
Caiwen Ding
18
18
0
12 Feb 2021
Learning Task-Oriented Communication for Edge Inference: An Information Bottleneck Approach
Jiawei Shao
Yuyi Mao
Jun Zhang
47
212
0
08 Feb 2021
Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
Aojun Zhou
Yukun Ma
Junnan Zhu
Jianbo Liu
Zhijie Zhang
Kun Yuan
Wenxiu Sun
Hongsheng Li
52
240
0
08 Feb 2021
SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks
Enzo Tartaglione
Andrea Bragagnolo
Francesco Odierna
A. Fiandrotti
Marco Grangetto
40
18
0
07 Feb 2021
AACP: Model Compression by Accurate and Automatic Channel Pruning
Lanbo Lin
Yujiu Yang
Zhenhua Guo
MQ
22
12
0
31 Jan 2021
Pruning and Quantization for Deep Neural Network Acceleration: A Survey
Tailin Liang
C. Glossner
Lei Wang
Shaobo Shi
Xiaotong Zhang
MQ
150
675
0
24 Jan 2021