Towards Understanding Iterative Magnitude Pruning: Why Lottery Tickets Win

13 June 2021
Jaron Maene
Mingxiao Li
Marie-Francine Moens

Papers citing "Towards Understanding Iterative Magnitude Pruning: Why Lottery Tickets Win"

24 / 24 papers shown
Mask in the Mirror: Implicit Sparsification
Tom Jacobs
R. Burkholz
19 Aug 2024
Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win
Utku Evci
Yani Andrew Ioannou
Cem Keskin
Yann N. Dauphin
07 Oct 2020
Pruning Neural Networks at Initialization: Why are We Missing the Mark?
Jonathan Frankle
Gintare Karolina Dziugaite
Daniel M. Roy
Michael Carbin
18 Sep 2020
Pruning neural networks without any data by iteratively conserving synaptic flow
Hidenori Tanaka
D. Kunin
Daniel L. K. Yamins
Surya Ganguli
09 Jun 2020
What is the State of Neural Network Pruning?
Davis W. Blalock
Jose Javier Gonzalez Ortiz
Jonathan Frankle
John Guttag
06 Mar 2020
Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda
Jonathan Frankle
Michael Carbin
05 Mar 2020
Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs
Jonathan Frankle
D. Schwab
Ari S. Morcos
29 Feb 2020
The Early Phase of Neural Network Training
Jonathan Frankle
D. Schwab
Ari S. Morcos
24 Feb 2020
Soft Threshold Weight Reparameterization for Learnable Sparsity
Aditya Kusupati
Vivek Ramanujan
Raghav Somani
Mitchell Wortsman
Prateek Jain
Sham Kakade
Ali Farhadi
08 Feb 2020
Linear Mode Connectivity and the Lottery Ticket Hypothesis
Jonathan Frankle
Gintare Karolina Dziugaite
Daniel M. Roy
Michael Carbin
11 Dec 2019
Winning the Lottery with Continuous Sparsification
Pedro H. P. Savarese
Hugo Silva
Michael Maire
10 Dec 2019
Rigging the Lottery: Making All Tickets Winners
Utku Evci
Trevor Gale
Jacob Menick
Pablo Samuel Castro
Erich Elsen
25 Nov 2019
What Do Compressed Deep Neural Networks Forget?
Sara Hooker
Aaron Courville
Gregory Clark
Yann N. Dauphin
Andrea Frome
13 Nov 2019
Sparse Networks from Scratch: Faster Training without Losing Performance
Tim Dettmers
Luke Zettlemoyer
10 Jul 2019
One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
Ari S. Morcos
Haonan Yu
Michela Paganini
Yuandong Tian
06 Jun 2019
Sparse Transfer Learning via Winning Lottery Tickets
Rahul Mehta
19 May 2019
Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask
Hattie Zhou
Janice Lan
Rosanne Liu
J. Yosinski
03 May 2019
The State of Sparsity in Deep Neural Networks
Trevor Gale
Erich Elsen
Sara Hooker
25 Feb 2019
An Empirical Model of Large-Batch Training
Sam McCandlish
Jared Kaplan
Dario Amodei
OpenAI Dota Team
14 Dec 2018
Rethinking the Value of Network Pruning
Zhuang Liu
Mingjie Sun
Tinghui Zhou
Gao Huang
Trevor Darrell
11 Oct 2018
SNIP: Single-shot Network Pruning based on Connection Sensitivity
Namhoon Lee
Thalaiyasingam Ajanthan
Philip Torr
04 Oct 2018
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle
Michael Carbin
09 Mar 2018
Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour
Priya Goyal
Piotr Dollár
Ross B. Girshick
P. Noordhuis
Lukasz Wesolowski
Aapo Kyrola
Andrew Tulloch
Yangqing Jia
Kaiming He
08 Jun 2017
Deep Residual Learning for Image Recognition
Kaiming He
Xiangyu Zhang
Shaoqing Ren
Jian Sun
10 Dec 2015