The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Jonathan Frankle, Michael Carbin
9 March 2018 · arXiv:1803.03635

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

Showing 30 of 730 citing papers.
  • Mixed Dimension Embeddings with Application to Memory-Efficient Recommendation Systems
    Antonio A. Ginart, Maxim Naumov, Dheevatsa Mudigere, Jiyan Yang, James Zou · 25 Sep 2019

  • RNN Architecture Learning with Sparse Regularization
    Jesse Dodge, Roy Schwartz, Hao Peng, Noah A. Smith · 06 Sep 2019

  • Image Captioning with Sparse Recurrent Neural Network
    J. Tan, Chee Seng Chan, Joon Huang Chuah · 28 Aug 2019 · VLM

  • A deep-learning-based surrogate model for data assimilation in dynamic subsurface flow problems
    Meng Tang, Yimin Liu, L. Durlofsky · 16 Aug 2019 · AI4CE

  • Convolutional Dictionary Learning in Hierarchical Networks
    Javier Zazo, Bahareh Tolooshams, Demba E. Ba · 23 Jul 2019 · BDL

  • Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks
    Alejandro Molina, P. Schramowski, Kristian Kersting · 15 Jul 2019 · ODL

  • Bringing Giant Neural Networks Down to Earth with Unlabeled Data
    Yehui Tang, Shan You, Chang Xu, Boxin Shi, Chao Xu · 13 Jul 2019

  • Sparse Networks from Scratch: Faster Training without Losing Performance
    Tim Dettmers, Luke Zettlemoyer · 10 Jul 2019

  • XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera
    Dushyant Mehta, Oleksandr Sotnychenko, Franziska Mueller, Weipeng Xu, Mohamed A. Elgharib, Pascal Fua, Hans-Peter Seidel, Helge Rhodin, Gerard Pons-Moll, Christian Theobalt · 01 Jul 2019 · 3DH

  • Selection via Proxy: Efficient Data Selection for Deep Learning
    Cody Coleman, Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, J. Leskovec, Matei A. Zaharia · 26 Jun 2019

  • Weight Agnostic Neural Networks
    Adam Gaier, David R Ha · 11 Jun 2019 · OOD

  • One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
    Ari S. Morcos, Haonan Yu, Michela Paganini, Yuandong Tian · 06 Jun 2019

  • SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers
    Igor Fedorov, Ryan P. Adams, Matthew Mattina, P. Whatmough · 28 May 2019

  • Self-supervised audio representation learning for mobile devices
    Marco Tagliasacchi, Beat Gfeller, Félix de Chaumont Quitry, Dominik Roblek · 24 May 2019 · SSL, AI4TS

  • Structured Compression by Weight Encryption for Unstructured Pruning and Quantization
    S. Kwon, Dongsoo Lee, Byeongwook Kim, Parichay Kapoor, Baeseong Park, Gu-Yeon Wei · 24 May 2019 · MQ

  • How Can We Be So Dense? The Benefits of Using Highly Sparse Representations
    Subutai Ahmad, Luiz Scheinkman · 27 Mar 2019

  • Convolution with even-sized kernels and symmetric padding
    Shuang Wu, Guanrui Wang, Pei Tang, F. Chen, Luping Shi · 20 Mar 2019

  • A Brain-inspired Algorithm for Training Highly Sparse Neural Networks
    Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy · 17 Mar 2019

  • Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers
    Baihan Lin · 27 Feb 2019

  • Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
    Hesham Mostafa, Xin Wang · 15 Feb 2019

  • Intrinsically Sparse Long Short-Term Memory Networks
    Shiwei Liu, Decebal Constantin Mocanu, Mykola Pechenizkiy · 26 Jan 2019

  • A Theoretical Analysis of Deep Q-Learning
    Jianqing Fan, Zhuoran Yang, Yuchen Xie, Zhaoran Wang · 01 Jan 2019

  • Neural Rejuvenation: Improving Deep Network Training by Enhancing Computational Resource Utilization
    Siyuan Qiao, Zhe-nan Lin, Jianming Zhang, Alan Yuille · 02 Dec 2018

  • Structured Pruning of Neural Networks with Budget-Aware Regularization
    Carl Lemaire, Andrew Achkar, Pierre-Marc Jodoin · 23 Nov 2018

  • Rethinking the Value of Network Pruning
    Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell · 11 Oct 2018

  • A Closer Look at Structured Pruning for Neural Network Compression
    Elliot J. Crowley, Jack Turner, Amos Storkey, Michael F. P. O'Boyle · 10 Oct 2018 · 3DPC

  • To compress or not to compress: Understanding the Interactions between Adversarial Attacks and Neural Network Compression
    Yiren Zhao, Ilia Shumailov, Robert D. Mullins, Ross J. Anderson · 29 Sep 2018 · AAML

  • Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle
    Rana Ali Amjad, Bernhard C. Geiger · 27 Feb 2018

  • Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
    Y. Gal, Zoubin Ghahramani · 06 Jun 2015 · UQCV, BDL

  • Improving neural networks by preventing co-adaptation of feature detectors
    Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov · 03 Jul 2012 · VLM