The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
arXiv:1803.03635, 9 March 2018
Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" (50 of 2,030 shown)
Ultra-light deep MIR by trimming lottery tickets. P. Esling, Théis Bazin, Adrien Bitton, Tristan Carsault, Ninon Devis. 31 Jul 2020.
Diet deep generative audio models with structured lottery. P. Esling, Ninon Devis, Adrien Bitton, Antoine Caillon, Axel Chemla-Romeu-Santos, Constance Douwes. 31 Jul 2020.
Growing Efficient Deep Networks by Structured Continuous Sparsification. Xin Yuan, Pedro H. P. Savarese, Michael Maire. 30 Jul 2020.
Hierarchical Action Classification with Network Pruning. Mahdi Davoodikakhki, KangKang Yin. 30 Jul 2020.
Towards Learning Convolutions from Scratch. Behnam Neyshabur. 27 Jul 2020.
Linear discriminant initialization for feed-forward neural networks. Marissa Masden, D. Sinha. 24 Jul 2020.
The Representation Theory of Neural Networks. M. Armenta, Pierre-Marc Jodoin. 23 Jul 2020.
TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning. Han Cai, Chuang Gan, Ligeng Zhu, Song Han. 22 Jul 2020.
Probabilistic Active Meta-Learning. Jean Kaddour, Steindór Sæmundsson, M. Deisenroth. 17 Jul 2020.
Lottery Tickets in Linear Models: An Analysis of Iterative Magnitude Pruning. Bryn Elesedy, Varun Kanade, Yee Whye Teh. 16 Jul 2020.
A General Family of Stochastic Proximal Gradient Methods for Deep Learning. Jihun Yun, A. Lozano, Eunho Yang. 15 Jul 2020.
Adversarial Examples and Metrics. Nico Döttling, Kathrin Grosse, Michael Backes, Ian Molloy. 14 Jul 2020.
T-Basis: a Compact Representation for Neural Networks. Anton Obukhov, M. Rakhuba, Stamatios Georgoulis, Menelaos Kanakis, Dengxin Dai, Luc Van Gool. 13 Jul 2020.
The Computational Limits of Deep Learning. Neil C. Thompson, Kristjan Greenewald, Keeheon Lee, Gabriel F. Manso. 10 Jul 2020.
The curious case of developmental BERTology: On sparsity, transfer learning, generalization and the brain. Xin Wang. 07 Jul 2020.
Ridge Regression with Over-Parametrized Two-Layer Networks Converge to Ridgelet Spectrum. Sho Sonoda, Isao Ishikawa, Masahiro Ikeda. 07 Jul 2020.
ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting. Xiaohan Ding, Tianxiang Hao, Jianchao Tan, Ji Liu, Jungong Han, Yuchen Guo, Guiguang Ding. 07 Jul 2020.
Meta-Learning with Network Pruning. Hongduan Tian, Bo Liu, Xiaotong Yuan, Qingshan Liu. 07 Jul 2020.
Bespoke vs. Prêt-à-Porter Lottery Tickets: Exploiting Mask Similarity for Trainable Sub-Network Finding. Michela Paganini, Jessica Zosa Forde. 06 Jul 2020.
Deep Partial Updating: Towards Communication Efficient Updating for On-device Inference. Zhongnan Qu, Cong Liu, Lothar Thiele. 06 Jul 2020.
Meta-Learning through Hebbian Plasticity in Random Networks. Elias Najarro, S. Risi. 06 Jul 2020.
DessiLBI: Exploring Structural Sparsity of Deep Networks via Differential Inclusion Paths. Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan Yao. 04 Jul 2020.
Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization? Yaniv Blumenfeld, D. Gilboa, Daniel Soudry. 02 Jul 2020.
Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights. Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li. 02 Jul 2020.
Go Wide, Then Narrow: Efficient Training of Deep Thin Networks. Denny Zhou, Mao Ye, Chen Chen, Tianjian Meng, Mingxing Tan, Xiaodan Song, Quoc V. Le, Qiang Liu, Dale Schuurmans. 01 Jul 2020.
A Survey on Self-supervised Pre-training for Sequential Transfer Learning in Neural Networks. H. H. Mao. 01 Jul 2020.
Data-driven Regularization via Racecar Training for Generalizing Neural Networks. You Xie, Nils Thuerey. 30 Jun 2020.
GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding. Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, M. Krikun, Noam M. Shazeer, Zhifeng Chen. 30 Jun 2020.
Training highly effective connectivities within neural networks with randomly initialized, fixed weights. Cristian Ivan, Razvan V. Florian. 30 Jun 2020.
Statistical Mechanical Analysis of Neural Network Pruning. Rupam Acharyya, Ankani Chattoraj, Boyu Zhang, Shouman Das, Daniel Stefankovic. 30 Jun 2020.
The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network Architectures. Yawei Li, Wen Li, Martin Danelljan, Peng Sun, Shuhang Gu, Luc Van Gool, Radu Timofte. 29 Jun 2020.
ESPN: Extremely Sparse Pruned Networks. Minsu Cho, Ameya Joshi, Chinmay Hegde. 28 Jun 2020.
Optimization and Generalization of Shallow Neural Networks with Quadratic Activation Functions. Stefano Sarao Mannelli, Eric Vanden-Eijnden, Lenka Zdeborová. 27 Jun 2020.
Supermasks in Superposition. Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, J. Yosinski, Ali Farhadi. 26 Jun 2020.
The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks. Wei Hu, Lechao Xiao, Ben Adlam, Jeffrey Pennington. 25 Jun 2020.
Data-dependent Pruning to find the Winning Lottery Ticket. Dániel Lévai, Zsolt Zombori. 25 Jun 2020.
Topological Insights into Sparse Neural Networks. Shiwei Liu, T. Lee, Anil Yaman, Zahra Atashgahi, David L. Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu. 24 Jun 2020.
Ramanujan Bipartite Graph Products for Efficient Block Sparse Neural Networks. Dharma Teja Vooturi, G. Varma, Kishore Kothapalli. 24 Jun 2020.
Principal Component Networks: Parameter Reduction Early in Training. R. Waleffe, Theodoros Rekatsinas. 23 Jun 2020.
NeuralScale: Efficient Scaling of Neurons for Resource-Constrained Deep Neural Networks. Eugene Lee, Chen-Yi Lee. 23 Jun 2020.
Revisiting Loss Modelling for Unstructured Pruning. César Laurent, Camille Ballas, Thomas George, Nicolas Ballas, Pascal Vincent. 22 Jun 2020.
Neural networks adapting to datasets: learning network size and topology. R. Janik, A. Nowak. 22 Jun 2020.
Logarithmic Pruning is All You Need. Laurent Orseau, Marcus Hutter, Omar Rivasplata. 22 Jun 2020.
Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning. M. Song, Jaehong Yoon, Eunho Yang, Sung Ju Hwang. 22 Jun 2020.
Deep Polynomial Neural Networks. Grigorios G. Chrysos, Stylianos Moschoglou, Giorgos Bouritsas, Jiankang Deng, Yannis Panagakis, Stefanos Zafeiriou. 20 Jun 2020.
Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation. Duong H. Le, Vo Trung Nhan, N. Thoai. 20 Jun 2020.
Discovering Symbolic Models from Deep Learning with Inductive Biases. M. Cranmer, Alvaro Sanchez-Gonzalez, Peter W. Battaglia, Rui Xu, Kyle Cranmer, D. Spergel, S. Ho. 19 Jun 2020.
Exploring Weight Importance and Hessian Bias in Model Pruning. Mingchen Li, Yahya Sattar, Christos Thrampoulidis, Samet Oymak. 19 Jun 2020.
Directional Pruning of Deep Neural Networks. Shih-Kang Chao, Zhanyu Wang, Yue Xing, Guang Cheng. 16 Jun 2020.
Progressive Skeletonization: Trimming more fat from a network at initialization. Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip Torr, Grégory Rogez, P. Dokania. 16 Jun 2020.