No Free Prune: Information-Theoretic Barriers to Pruning at Initialization

2 February 2024
Tanishq Kumar, Kevin Luo, Mark Sellke

Papers citing "No Free Prune: Information-Theoretic Barriers to Pruning at Initialization"

30 of 30 papers shown.

1. Six Lectures on Linearized Neural Networks (25 Aug 2023)
   Theodor Misiakiewicz, Andrea Montanari
2. Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask? (06 Oct 2022)
   Mansheej Paul, F. Chen, Brett W. Larsen, Jonathan Frankle, Surya Ganguli, Gintare Karolina Dziugaite [UQCV]
3. Rare Gems: Finding Lottery Tickets at Initialization (24 Feb 2022)
   Kartik K. Sreenivasan, Jy-yong Sohn, Liu Yang, Matthew Grinde, Alliot Nagle, Hongyi Wang, Eric P. Xing, Kangwook Lee, Dimitris Papailiopoulos
4. A Universal Law of Robustness via Isoperimetry (26 May 2021)
   Sébastien Bubeck, Mark Sellke
5. Progressive Skeletonization: Trimming more fat from a network at initialization (16 Jun 2020)
   Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip Torr, Grégory Rogez, P. Dokania
6. What is the State of Neural Network Pruning? (06 Mar 2020)
   Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
7. SpArch: Efficient Architecture for Sparse Matrix Multiplication (20 Feb 2020)
   Zhekai Zhang, Hanrui Wang, Song Han, W. Dally
8. Implicit Regularization of Random Feature Models (19 Feb 2020)
   Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel
9. What's Hidden in a Randomly Weighted Neural Network? (29 Nov 2019)
   Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, Mohammad Rastegari
10. Does Learning Require Memorization? A Short Tale about a Long Tail (12 Jun 2019)
    Vitaly Feldman [TDI]
11. Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask (03 May 2019)
    Hattie Zhou, Janice Lan, Rosanne Liu, J. Yosinski [UQCV]
12. The State of Sparsity in Deep Neural Networks (25 Feb 2019)
    Trevor Gale, Erich Elsen, Sara Hooker
13. Tightening Mutual Information Based Bounds on Generalization Error (15 Jan 2019)
    Yuheng Bu, Shaofeng Zou, Venugopal V. Veeravalli
14. Rethinking the Value of Network Pruning (11 Oct 2018)
    Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell
15. SNIP: Single-shot Network Pruning based on Connection Sensitivity (04 Oct 2018)
    Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr [VLM]
16. Gradient Descent Provably Optimizes Over-parameterized Neural Networks (04 Oct 2018)
    S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh [MLT, ODL]
17. The phase transition for the existence of the maximum likelihood estimate in high-dimensional logistic regression (25 Apr 2018)
    Emmanuel J. Candes, Pragya Sur
18. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (09 Mar 2018)
    Jonathan Frankle, Michael Carbin
19. AMC: AutoML for Model Compression and Acceleration on Mobile Devices (10 Feb 2018)
    Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li Li, Song Han
20. To prune, or not to prune: exploring the efficacy of pruning for model compression (05 Oct 2017)
    Michael Zhu, Suyog Gupta
21. Implicit Regularization in Deep Learning (06 Sep 2017)
    Behnam Neyshabur
22. Exploring the Regularity of Sparse Structure in Convolutional Neural Networks (24 May 2017)
    Huizi Mao, Song Han, Jeff Pool, Wenshuo Li, Xingyu Liu, Yu Wang, W. Dally
23. Information-theoretic analysis of generalization capability of learning algorithms (22 May 2017)
    Aolin Xu, Maxim Raginsky
24. Densely Connected Convolutional Networks (25 Aug 2016)
    Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger [PINN, 3DV]
25. Learning Structured Sparsity in Deep Neural Networks (12 Aug 2016)
    W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
26. Demystifying Fixed k-Nearest Neighbor Information Estimators (11 Apr 2016)
    Weihao Gao, Sewoong Oh, Pramod Viswanath
27. EIE: Efficient Inference Engine on Compressed Deep Neural Network (04 Feb 2016)
    Song Han, Xingyu Liu, Huizi Mao, Jing Pu, A. Pedram, M. Horowitz, W. Dally
28. Learning both Weights and Connections for Efficient Neural Networks (08 Jun 2015)
    Song Han, Jeff Pool, J. Tran, W. Dally [CVBM]
29. On the Computational Efficiency of Training Neural Networks (05 Oct 2014)
    Roi Livni, Shai Shalev-Shwartz, Ohad Shamir
30. Network In Network (16 Dec 2013)
    Min Lin, Qiang Chen, Shuicheng Yan