The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Jonathan Frankle, Michael Carbin
9 March 2018 · arXiv:1803.03635 (v5, latest)
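For context on what the listed papers are citing, below is a minimal sketch of the procedure the paper proposes: iterative magnitude pruning with rewinding to the original initialization. This is an illustrative PyTorch reconstruction, not code from this page or the paper; the function name, the caller-supplied `train` loop, and the `rounds`/`prune_fraction` defaults are all assumptions.

```python
# Sketch of the lottery-ticket procedure (iterative magnitude pruning
# with weight rewinding). All names here are illustrative assumptions.
import copy
import torch
import torch.nn as nn

def find_winning_ticket(model: nn.Module, train, rounds: int = 5,
                        prune_fraction: float = 0.2) -> dict:
    init_state = copy.deepcopy(model.state_dict())   # theta_0, saved before training
    masks = {name: torch.ones_like(p)                # start fully dense
             for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train(model, masks)                          # caller trains with masks applied
        for name, p in model.named_parameters():
            if name not in masks:
                continue
            # Among weights still alive, prune the lowest-magnitude fraction.
            alive = p.detach().abs()[masks[name].bool()]
            threshold = torch.quantile(alive, prune_fraction)
            masks[name] = masks[name] * (p.detach().abs() > threshold).float()
        # Rewind surviving weights to their original initialization.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])
    return masks                                     # the "winning ticket" mask
```

Under this sketch, the returned masks define the sparse subnetwork that, per the paper's hypothesis, can be retrained from theta_0 to match the dense network's accuracy.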
Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

Showing 30 of 2,030 citing papers (community tags in brackets):
- Sparse Transfer Learning via Winning Lottery Tickets. Rahul Mehta. 19 May 2019. [UQCV]
- Network Pruning for Low-Rank Binary Indexing. Dongsoo Lee, S. Kwon, Byeongwook Kim, Parichay Kapoor, Gu-Yeon Wei. 14 May 2019.
- Analysis of Gene Interaction Graphs as Prior Knowledge for Machine Learning Models. Paul Bertin, Mohammad Hashir, Martin Weiss, Vincent Frappier, T. Perkins, G. Boucher, Joseph Paul Cohen. 06 May 2019.
- Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask. Hattie Zhou, Janice Lan, Rosanne Liu, J. Yosinski. 03 May 2019. [UQCV]
- Differentiable Visual Computing. Tzu-Mao Li. 27 Apr 2019.
- Low-Memory Neural Network Training: A Technical Report. N. Sohoni, Christopher R. Aberger, Megan Leszczynski, Jian Zhang, Christopher Ré. 24 Apr 2019.
- Filter Pruning by Switching to Neighboring CNNs with Good Attributes. Yang He, Ping Liu, Linchao Zhu, Yi Yang. 08 Apr 2019. [VLM]
- Adversarial Robustness vs Model Compression, or Both? Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin. 29 Mar 2019. [AAML]
- How Can We Be So Dense? The Benefits of Using Highly Sparse Representations. Subutai Ahmad, Luiz Scheinkman. 27 Mar 2019.
- Convolution with even-sized kernels and symmetric padding. Shuang Wu, Guanrui Wang, Pei Tang, F. Chen, Luping Shi. 20 Mar 2019.
- A Brain-inspired Algorithm for Training Highly Sparse Neural Networks. Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy. 17 Mar 2019.
- Stabilizing the Lottery Ticket Hypothesis. Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin. 05 Mar 2019.
- Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers. Baihan Lin. 27 Feb 2019.
- Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization. Hesham Mostafa, Xin Wang. 15 Feb 2019.
- Identity Crisis: Memorization and Generalization under Extreme Overparameterization. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Michael C. Mozer, Y. Singer. 13 Feb 2019.
- Intrinsically Sparse Long Short-Term Memory Networks. Shiwei Liu, Decebal Constantin Mocanu, Mykola Pechenizkiy. 26 Jan 2019.
- Sparse evolutionary Deep Learning with over one million artificial neurons on commodity hardware. Shiwei Liu, Decebal Constantin Mocanu, A. R. Ramapuram Matavalam, Yulong Pei, Mykola Pechenizkiy. 26 Jan 2019. [BDL]
- A Theoretical Analysis of Deep Q-Learning. Jianqing Fan, Zhuoran Yang, Yuchen Xie, Zhaoran Wang. 01 Jan 2019.
- On the Benefit of Width for Neural Networks: Disappearance of Bad Basins. Dawei Li, Tian Ding, Ruoyu Sun. 28 Dec 2018.
- Artificial neural networks condensation: A strategy to facilitate adaption of machine learning in medical settings by reducing computational burden. Dianbo Liu, N. Sepulveda, Ming Zheng. 23 Dec 2018.
- Neural Rejuvenation: Improving Deep Network Training by Enhancing Computational Resource Utilization. Siyuan Qiao, Zhe Lin, Jianming Zhang, Alan Yuille. 02 Dec 2018.
- Structured Pruning of Neural Networks with Budget-Aware Regularization. Carl Lemaire, Andrew Achkar, Pierre-Marc Jodoin. 23 Nov 2018.
- The Deep Weight Prior. Andrei Atanov, Arsenii Ashukha, Kirill Struminsky, Dmitry Vetrov, Max Welling. 16 Oct 2018. [BDL]
- Rethinking the Value of Network Pruning. Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell. 11 Oct 2018.
- A Closer Look at Structured Pruning for Neural Network Compression. Elliot J. Crowley, Jack Turner, Amos Storkey, Michael F. P. O'Boyle. 10 Oct 2018. [3DPC]
- Learning with Random Learning Rates. Léonard Blier, Pierre Wolinski, Yann Ollivier. 02 Oct 2018. [OOD]
- To compress or not to compress: Understanding the Interactions between Adversarial Attacks and Neural Network Compression. Yiren Zhao, Ilia Shumailov, Robert D. Mullins, Ross J. Anderson. 29 Sep 2018. [AAML]
- Dense neural networks as sparse graphs and the lightning initialization. T. Pircher, D. Haspel, Eberhard Schlücker. 24 Sep 2018.
- Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle. Rana Ali Amjad, Bernhard C. Geiger. 27 Feb 2018.
- Nonparametric regression using deep neural networks with ReLU activation function. Johannes Schmidt-Hieber. 22 Aug 2017.