Improve Convolutional Neural Network Pruning by Maximizing Filter Variety

11 March 2022
Nathan Hubens, M. Mancas, B. Gosselin, Marius Preda, T. Zaharia

Papers citing "Improve Convolutional Neural Network Pruning by Maximizing Filter Variety"

12 papers

One-Cycle Pruning: Pruning ConvNets Under a Tight Training Budget
Nathan Hubens, M. Mancas, B. Gosselin, Marius Preda, T. Zaharia
05 Jul 2021

Movement Pruning: Adaptive Sparsity by Fine-Tuning
Victor Sanh, Thomas Wolf, Alexander M. Rush
15 May 2020

fastai: A Layered API for Deep Learning
Jeremy Howard, Sylvain Gugger [AI4CE]
11 Feb 2020

Linear Mode Connectivity and the Lottery Ticket Hypothesis
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin [MoMe]
11 Dec 2019

The State of Sparsity in Deep Neural Networks
Trevor Gale, Erich Elsen, Sara Hooker
25 Feb 2019

Rethinking the Value of Network Pruning
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell
11 Oct 2018

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
09 Mar 2018

To prune, or not to prune: exploring the efficacy of pruning for model compression
Michael Zhu, Suyog Gupta
05 Oct 2017

Channel Pruning for Accelerating Very Deep Neural Networks
Yihui He, Xiangyu Zhang, Jian Sun
19 Jul 2017

Variational Dropout Sparsifies Deep Neural Networks
Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov [BDL]
19 Jan 2017

Pruning Filters for Efficient ConvNets
Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf [3DPC]
31 Aug 2016

Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, J. Tran, W. Dally [CVBM]
08 Jun 2015