ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask

3 May 2019
Hattie Zhou
Janice Lan
Rosanne Liu
J. Yosinski
    UQCV

Papers citing "Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask"

Showing 50 of 250 citing papers.
Bespoke vs. Prêt-à-Porter Lottery Tickets: Exploiting Mask Similarity for Trainable Sub-Network Finding
Michela Paganini
Jessica Zosa Forde
UQCV
51
6
0
06 Jul 2020
Deep Partial Updating: Towards Communication Efficient Updating for On-device Inference
Zhongnan Qu
Cong Liu
Lothar Thiele
3DH
77
3
0
06 Jul 2020
Training highly effective connectivities within neural networks with randomly initialized, fixed weights
Cristian Ivan
Razvan V. Florian
29
4
0
30 Jun 2020
The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network Architectures
Yawei Li
Wen Li
Martin Danelljan
Peng Sun
Shuhang Gu
Luc Van Gool
Radu Timofte
86
18
0
29 Jun 2020
ESPN: Extremely Sparse Pruned Networks
Minsu Cho
Ameya Joshi
Chinmay Hegde
54
9
0
28 Jun 2020
Supermasks in Superposition
Mitchell Wortsman
Vivek Ramanujan
Rosanne Liu
Aniruddha Kembhavi
Mohammad Rastegari
J. Yosinski
Ali Farhadi
SSLCLL
111
297
0
26 Jun 2020
Data-dependent Pruning to find the Winning Lottery Ticket
Dániel Lévai
Zsolt Zombori
UQCV
28
0
0
25 Jun 2020
Topological Insights into Sparse Neural Networks
Shiwei Liu
T. Lee
Anil Yaman
Zahra Atashgahi
David L. Ferraro
Ghada Sokar
Mykola Pechenizkiy
Decebal Constantin Mocanu
61
30
0
24 Jun 2020
NeuralScale: Efficient Scaling of Neurons for Resource-Constrained Deep Neural Networks
Eugene Lee
Chen-Yi Lee
47
14
0
23 Jun 2020
Exploring Weight Importance and Hessian Bias in Model Pruning
Mingchen Li
Yahya Sattar
Christos Thrampoulidis
Samet Oymak
71
4
0
19 Jun 2020
Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient
Ankit Pensia
Shashank Rajput
Alliot Nagle
Harit Vishwakarma
Dimitris Papailiopoulos
64
104
0
14 Jun 2020
How many winning tickets are there in one DNN?
Kathrin Grosse
Michael Backes
UQCV
36
2
0
12 Jun 2020
Pruning neural networks without any data by iteratively conserving synaptic flow
Hidenori Tanaka
D. Kunin
Daniel L. K. Yamins
Surya Ganguli
206
650
0
09 Jun 2020
Shapley Value as Principled Metric for Structured Network Pruning
Marco Ancona
Cengiz Öztireli
Markus Gross
62
9
0
02 Jun 2020
Pruning via Iterative Ranking of Sensitivity Statistics
Stijn Verdenius
M. Stol
Patrick Forré
AAML
82
38
0
01 Jun 2020
Dynamic Sparsity Neural Networks for Automatic Speech Recognition
Zhaofeng Wu
Ding Zhao
Qiao Liang
Jiahui Yu
Anmol Gulati
Ruoming Pang
51
41
0
16 May 2020
Successfully Applying the Stabilized Lottery Ticket Hypothesis to the Transformer Architecture
Christopher Brix
Parnia Bahar
Hermann Ney
59
38
0
04 May 2020
When BERT Plays the Lottery, All Tickets Are Winning
Sai Prasanna
Anna Rogers
Anna Rumshisky
MILM
88
187
0
01 May 2020
Out-of-the-box channel pruned networks
Ragav Venkatesan
Gurumurthy Swaminathan
Xiong Zhou
Anna Luo
29
0
0
30 Apr 2020
Masking as an Efficient Alternative to Finetuning for Pretrained Language Models
Mengjie Zhao
Tao R. Lin
Fei Mi
Martin Jaggi
Hinrich Schütze
77
121
0
26 Apr 2020
How fine can fine-tuning be? Learning efficient language models
Evani Radiya-Dixit
Xin Wang
53
66
0
24 Apr 2020
CAZSL: Zero-Shot Regression for Pushing Models by Generalizing Through Context
Wenyu Zhang
Skyler Seto
Devesh K. Jha
71
5
0
26 Mar 2020
Pruned Neural Networks are Surprisingly Modular
Daniel Filan
Shlomi Hod
Cody Wild
Andrew Critch
Stuart J. Russell
32
8
0
10 Mar 2020
Train-by-Reconnect: Decoupling Locations of Weights from their Values
Yushi Qiu
R. Suda
20
0
0
05 Mar 2020
Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs
Jonathan Frankle
D. Schwab
Ari S. Morcos
115
143
0
29 Feb 2020
Learned Threshold Pruning
K. Azarian
Yash Bhalgat
Jinwon Lee
Tijmen Blankevoort
MQ
79
38
0
28 Feb 2020
The Early Phase of Neural Network Training
Jonathan Frankle
D. Schwab
Ari S. Morcos
96
174
0
24 Feb 2020
Calibrate and Prune: Improving Reliability of Lottery Tickets Through Prediction Calibration
Bindya Venkatesh
Jayaraman J. Thiagarajan
Kowshik Thopalli
P. Sattigeri
70
14
0
10 Feb 2020
Soft Threshold Weight Reparameterization for Learnable Sparsity
Aditya Kusupati
Vivek Ramanujan
Raghav Somani
Mitchell Wortsman
Prateek Jain
Sham Kakade
Ali Farhadi
167
247
0
08 Feb 2020
Multimodal Controller for Generative Models
Enmao Diao
Jie Ding
Vahid Tarokh
74
3
0
07 Feb 2020
Proving the Lottery Ticket Hypothesis: Pruning is All You Need
Eran Malach
Gilad Yehudai
Shai Shalev-Shwartz
Ohad Shamir
130
276
0
03 Feb 2020
On Iterative Neural Network Pruning, Reinitialization, and the Similarity of Masks
Michela Paganini
Jessica Zosa Forde
84
19
0
14 Jan 2020
Pruning Convolutional Neural Networks with Self-Supervision
Mathilde Caron
Ari S. Morcos
Piotr Bojanowski
Julien Mairal
Armand Joulin
SSL3DPC
59
12
0
10 Jan 2020
Optimization for deep learning: theory and algorithms
Ruoyu Sun
ODL
137
169
0
19 Dec 2019
WaLDORf: Wasteless Language-model Distillation On Reading-comprehension
J. Tian
A. Kreuzer
Pai-Hung Chen
Hans-Martin Will
VLM
60
3
0
13 Dec 2019
Linear Mode Connectivity and the Lottery Ticket Hypothesis
Jonathan Frankle
Gintare Karolina Dziugaite
Daniel M. Roy
Michael Carbin
MoMe
181
630
0
11 Dec 2019
Winning the Lottery with Continuous Sparsification
Pedro H. P. Savarese
Hugo Silva
Michael Maire
95
137
0
10 Dec 2019
What's Hidden in a Randomly Weighted Neural Network?
Vivek Ramanujan
Mitchell Wortsman
Aniruddha Kembhavi
Ali Farhadi
Mohammad Rastegari
74
362
0
29 Nov 2019
Rigging the Lottery: Making All Tickets Winners
Utku Evci
Trevor Gale
Jacob Menick
Pablo Samuel Castro
Erich Elsen
228
610
0
25 Nov 2019
Student Specialization in Deep ReLU Networks With Finite Width and Input Dimension
Yuandong Tian
MLT
57
8
0
30 Sep 2019
Class-dependent Compression of Deep Neural Networks
R. Entezari
O. Saukh
73
7
0
23 Sep 2019
LCA: Loss Change Allocation for Neural Network Training
Janice Lan
Rosanne Liu
Hattie Zhou
J. Yosinski
73
25
0
03 Sep 2019
Learning Digital Circuits: A Journey Through Weight Invariant Self-Pruning Neural Networks
Amey Agrawal
Rohit Karlupia
30
0
0
30 Aug 2019
Image Captioning with Sparse Recurrent Neural Network
J. Tan
Chee Seng Chan
Joon Huang Chuah
VLM
47
6
0
28 Aug 2019
Deep network as memory space: complexity, generalization, disentangled representation and interpretability
X. Dong
L. Zhou
46
1
0
12 Jul 2019
Sparse Networks from Scratch: Faster Training without Losing Performance
Tim Dettmers
Luke Zettlemoyer
159
341
0
10 Jul 2019
Weight Agnostic Neural Networks
Adam Gaier
David R Ha
OOD
67
242
0
11 Jun 2019
Playing the lottery with rewards and multiple languages: lottery tickets in RL and NLP
Haonan Yu
Sergey Edunov
Yuandong Tian
Ari S. Morcos
62
150
0
06 Jun 2019
Luck Matters: Understanding Training Dynamics of Deep ReLU Networks
Yuandong Tian
Tina Jiang
Qucheng Gong
Ari S. Morcos
169
25
0
31 May 2019
Sparse Transfer Learning via Winning Lottery Tickets
Rahul Mehta
UQCV
77
45
0
19 May 2019