A Deeper Look into Convolutions via Eigenvalue-based Pruning

4 February 2021
Ilke Çugu, Emre Akbas
FAtt
ArXiv · PDF · HTML

Papers citing "A Deeper Look into Convolutions via Eigenvalue-based Pruning"

24 / 24 papers shown

Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
Junjie Liu, Zhe Xu, Runbin Shi, R. Cheung, Hayden Kwok-Hay So
46 · 121 · 0 · 14 May 2020

Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning
Seul-Ki Yeom, P. Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, K. Müller, Wojciech Samek
CVBM
53 · 207 · 0 · 18 Dec 2019

Learning Filter Basis for Convolutional Neural Network Compression
Yawei Li, Shuhang Gu, Luc Van Gool, Radu Timofte
SupR
54 · 99 · 0 · 23 Aug 2019

Green AI
Roy Schwartz, Jesse Dodge, Noah A. Smith, Oren Etzioni
102 · 1,141 · 0 · 22 Jul 2019

Towards Optimal Structured CNN Pruning via Generative Adversarial Learning
Shaohui Lin, Rongrong Ji, Chenqian Yan, Baochang Zhang, Liujuan Cao, QiXiang Ye, Feiyue Huang, David Doermann
CVBM
53 · 508 · 0 · 22 Mar 2019

Reconciling modern machine learning practice and the bias-variance trade-off
M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal
227 · 1,644 · 0 · 28 Dec 2018

Rethinking the Value of Network Pruning
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell
36 · 1,471 · 0 · 11 Oct 2018

Coreset-Based Neural Network Compression
Abhimanyu Dubey, Moitreya Chatterjee, Narendra Ahuja
57 · 81 · 0 · 25 Jul 2018

Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus
66 · 79 · 0 · 15 Apr 2018

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
225 · 3,461 · 0 · 09 Mar 2018

Stronger generalization bounds for deep nets via a compression approach
Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang
MLT, AI4CE
84 · 640 · 0 · 14 Feb 2018

NISP: Pruning Networks using Neuron Importance Score Propagation
Ruichi Yu, Ang Li, Chun-Fu Chen, Jui-Hsin Lai, Vlad I. Morariu, Xintong Han, M. Gao, Ching-Yung Lin, L. Davis
70 · 799 · 0 · 16 Nov 2017

Learning Efficient Convolutional Networks through Network Slimming
Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Changshui Zhang
122 · 2,419 · 0 · 22 Aug 2017

ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
Jian-Hao Luo, Jianxin Wu, Weiyao Lin
52 · 1,760 · 0 · 20 Jul 2017

Channel Pruning for Accelerating Very Deep Neural Networks
Yihui He, Xiangyu Zhang, Jian Sun
196 · 2,522 · 0 · 19 Jul 2017

Soft Weight-Sharing for Neural Network Compression
Karen Ullrich, Edward Meeds, Max Welling
164 · 417 · 0 · 13 Feb 2017

Understanding deep learning requires rethinking generalization
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
HAI
334 · 4,625 · 0 · 10 Nov 2016

Pruning Filters for Efficient ConvNets
Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf
3DPC
188 · 3,695 · 0 · 31 Aug 2016

Learning Structured Sparsity in Deep Neural Networks
W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
176 · 2,337 · 0 · 12 Aug 2016

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
F. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, W. Dally, Kurt Keutzer
139 · 7,477 · 0 · 24 Feb 2016

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han, Huizi Mao, W. Dally
3DGS
253 · 8,832 · 0 · 01 Oct 2015

Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, J. Tran, W. Dally
CVBM
310 · 6,669 · 0 · 08 Jun 2015

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
VLM
309 · 18,609 · 0 · 06 Feb 2015

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
FAtt, SSL
587 · 15,874 · 0 · 12 Nov 2013