
Pruning Deep Neural Networks from a Sparsity Perspective
arXiv:2302.05601, versions v1, v2, v3 (latest)
11 February 2023
Enmao Diao, G. Wang, Jiawei Zhan, Yuhong Yang, Jie Ding, Vahid Tarokh

Papers citing "Pruning Deep Neural Networks from a Sparsity Perspective"

23 of 23 citing papers shown

Federated Learning Challenges and Opportunities: An Outlook
Jie Ding, Eric W. Tramel, Anit Kumar Sahu, Shuang Wu, Salman Avestimehr, Tao Zhang
FedML · 116 · 57 · 0 · 01 Feb 2022

SPIDER: Searching Personalized Neural Architecture for Federated Learning
Erum Mushtaq, Chaoyang He, Jie Ding, A. Avestimehr
FedML · 77 · 20 · 0 · 27 Dec 2021

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
MQ · 314 · 723 · 0 · 31 Jan 2021

HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients
Enmao Diao, Jie Ding, Vahid Tarokh
FedML · 96 · 558 · 0 · 03 Oct 2020

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
275 · 388 · 0 · 05 Mar 2020

Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection
Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam R. Klivans, Qiang Liu
66 · 111 · 0 · 03 Mar 2020

Rigging the Lottery: Making All Tickets Winners
Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, Erich Elsen
197 · 602 · 0 · 25 Nov 2019

Speech Emotion Recognition with Dual-Sequence LSTM Architecture
Jianyou Wang, Michael Xue, Ryan Culhane, Enmao Diao, Jie Ding, Vahid Tarokh
AI4TS · 52 · 113 · 0 · 20 Oct 2019

The State of Sparsity in Deep Neural Networks
Trevor Gale, Erich Elsen, Sara Hooker
161 · 761 · 0 · 25 Feb 2019

Model Selection Techniques -- An Overview
Jie Ding, Vahid Tarokh, Yuhong Yang
156 · 259 · 0 · 22 Oct 2018

SNIP: Single-shot Network Pruning based on Connection Sensitivity
Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr
VLM · 263 · 1,206 · 0 · 04 Oct 2018

Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus
74 · 79 · 0 · 15 Apr 2018

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
242 · 3,484 · 0 · 09 Mar 2018

Stronger generalization bounds for deep nets via a compression approach
Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang
MLT · AI4CE · 86 · 643 · 0 · 14 Feb 2018

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
Han Xiao, Kashif Rasul, Roland Vollgraf
283 · 8,904 · 0 · 25 Aug 2017

ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
58 · 1,760 · 0 · 20 Jul 2017

Channel Pruning for Accelerating Very Deep Neural Networks
Yihui He, Xiangyu Zhang, Jian Sun
204 · 2,525 · 0 · 19 Jul 2017

Federated Learning: Strategies for Improving Communication Efficiency
Jakub Konecný, H. B. McMahan, Felix X. Yu, Peter Richtárik, A. Suresh, Dave Bacon
FedML · 306 · 4,649 · 0 · 18 Oct 2016

WaveNet: A Generative Model for Raw Audio
Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, A. Senior, Koray Kavukcuoglu
DiffM · 406 · 7,405 · 0 · 12 Sep 2016

SGDR: Stochastic Gradient Descent with Warm Restarts
I. Loshchilov, Frank Hutter
ODL · 333 · 8,169 · 0 · 13 Aug 2016

Wide Residual Networks
Sergey Zagoruyko, N. Komodakis
349 · 7,995 · 0 · 23 May 2016

Identity Mappings in Deep Residual Networks
Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun
354 · 10,192 · 0 · 16 Mar 2016

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han, Huizi Mao, W. Dally
3DGS · 261 · 8,854 · 0 · 01 Oct 2015