Pruning Neural Networks at Initialization: Why are We Missing the Mark?
arXiv: 2009.08576 · 18 September 2020
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin
Papers citing "Pruning Neural Networks at Initialization: Why are We Missing the Mark?" (6 of 56 papers shown)
| Title | Authors | Counts | Date |
|---|---|---|---|
| Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win | Utku Evci, Yani Andrew Ioannou, Cem Keskin, Yann N. Dauphin | 42 / 87 / 0 | 07 Oct 2020 |
| Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot | Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Liwei Wang, Jason D. Lee | 16 / 85 / 0 | 22 Sep 2020 |
| What is the State of Neural Network Pruning? | Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag | 191 / 1,032 / 0 | 06 Mar 2020 |
| Comparing Rewinding and Fine-tuning in Neural Network Pruning | Alex Renda, Jonathan Frankle, Michael Carbin | 235 / 383 / 0 | 05 Mar 2020 |
| The large learning rate phase of deep learning: the catapult mechanism (ODL) | Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari | 159 / 236 / 0 | 04 Mar 2020 |
| Norm-Based Capacity Control in Neural Networks | Behnam Neyshabur, Ryota Tomioka, Nathan Srebro | 127 / 577 / 0 | 27 Feb 2015 |