The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Jonathan Frankle, Michael Carbin
9 March 2018 · arXiv:1803.03635
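
Many of the citing papers below build directly on the paper's core procedure, iterative magnitude pruning (IMP) with rewinding: train the dense network, prune the smallest-magnitude weights, reset the surviving weights to their original initialization, and repeat. Below is a minimal PyTorch-style sketch of that loop; the model, data, and hyperparameters are illustrative stand-ins, not the authors' released code.

```python
# Minimal sketch of iterative magnitude pruning (IMP) with weight rewinding,
# in the spirit of Frankle & Carbin (2018). Everything here (model, random
# placeholder data, learning rate, pruning schedule) is illustrative.
import copy
import torch
import torch.nn as nn

def train(model, masks, steps=100):
    """Train while keeping pruned weights at zero (placeholder task: random data)."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        with torch.no_grad():  # re-apply the mask so pruned weights stay zero
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])

def prune_smallest(masks, model, fraction=0.2):
    """Per layer, zero out the given fraction of the smallest-magnitude survivors."""
    for name, p in model.named_parameters():
        if name not in masks:
            continue
        alive = p.detach().abs()[masks[name].bool()]
        k = int(fraction * alive.numel())
        if k == 0:
            continue
        threshold = alive.sort().values[k - 1]
        masks[name] = masks[name] * (p.detach().abs() > threshold).float()

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
theta0 = copy.deepcopy(model.state_dict())  # saved initialization, for rewinding
masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if "weight" in n}

for _ in range(5):
    train(model, masks)
    prune_smallest(masks, model)     # prune 20% of the surviving weights
    model.load_state_dict(theta0)    # rewind survivors to their initial values
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p.mul_(masks[n])     # the masked, rewound network is the "ticket"
```

Each round removes 20% of the surviving weights per layer, so five rounds keep roughly 0.8^5 ≈ 33% of the original parameters; the hypothesis is that this rewound subnetwork trains to accuracy comparable to the dense network.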

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

Showing 50 of 746 citing papers. Bracketed codes are the site's community tags.
Non-Asymptotic Guarantees for Robust Statistical Learning under Infinite Variance Assumption
Lihu Xu, Fang Yao, Qiuran Yao, Huiming Zhang (10 Jan 2022)

Glance and Focus Networks for Dynamic Visual Recognition
Gao Huang, Yulin Wang, Kangchen Lv, Haojun Jiang, Wenhui Huang, Pengfei Qi, S. Song (09 Jan 2022) [3DH]

Automatic Mixed-Precision Quantization Search of BERT
Changsheng Zhao, Ting Hua, Yilin Shen, Qian Lou, Hongxia Jin (30 Dec 2021) [MQ]

EvoMoE: An Evolutional Mixture-of-Experts Training Framework via Dense-To-Sparse Gate
Xiaonan Nie, Xupeng Miao, Shijie Cao, Lingxiao Ma, Qibin Liu, Jilong Xue, Youshan Miao, Yi Liu, Zhi-Xin Yang, Bin Cui (29 Dec 2021) [MoMe, MoE]

Neural Network Module Decomposition and Recomposition
Hiroaki Kingetsu, Kenichi Kobayashi, Taiji Suzuki (25 Dec 2021)

GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design
Sung Une Lee, Boming Xia, Yongan Zhang, Ang Li, Yingyan Lin (22 Dec 2021) [GNN]

Automated Deep Learning: Neural Architecture Search Is Not the End
Xuanyi Dong, D. Kedziora, Katarzyna Musial, Bogdan Gabrys (16 Dec 2021)

Visualizing the Loss Landscape of Winning Lottery Tickets
Robert Bain (16 Dec 2021) [UQCV]

Pruning Coherent Integrated Photonic Neural Networks Using the Lottery Ticket Hypothesis
Sanmitra Banerjee, Mahdi Nikdast, S. Pasricha, Krishnendu Chakrabarty (14 Dec 2021)

From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression
Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang (14 Dec 2021) [VLM]

Achieving Low Complexity Neural Decoders via Iterative Pruning
Vikrant Malik, Rohan Ghosh, Mehul Motani (11 Dec 2021)

SHRIMP: Sparser Random Feature Models via Iterative Magnitude Pruning
Yuege Xie, Bobby Shi, Hayden Schaeffer, Rachel A. Ward (07 Dec 2021)

Enhanced Exploration in Neural Feature Selection for Deep Click-Through Rate Prediction Models via Ensemble of Gating Layers
L. Guan, Xia Xiao, Ming-yue Chen, Youlong Cheng (07 Dec 2021)

Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models
Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré (30 Nov 2021)

Embedding Principle: a hierarchical structure of loss landscape of deep neural networks
Tao Luo, Yuqing Li, Zhongwang Zhang, Yaoyu Zhang, Z. Xu (30 Nov 2021)

How Well Do Sparse ImageNet Models Transfer?
Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh (26 Nov 2021)

Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
Tolga Birdal, Aaron Lou, Leonidas J. Guibas, Umut Şimşekli (25 Nov 2021)

Hidden-Fold Networks: Random Recurrent Residuals Using Sparse Supermasks
Ángel López García-Arias, Masanori Hashimoto, Masato Motomura, Jaehoon Yu (24 Nov 2021)

Pruning Self-attentions into Convolutional Layers in Single Path
Haoyu He, Jianfei Cai, Jing Liu, Zizheng Pan, Jing Zhang, Dacheng Tao, Bohan Zhuang (23 Nov 2021) [ViT]

Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration
Yifan Gong, Geng Yuan, Zheng Zhan, Wei Niu, Zhengang Li, ..., Sijia Liu, Bin Ren, Xue Lin, Xulong Tang, Yanzhi Wang (22 Nov 2021)

Neural Fields in Visual Computing and Beyond
Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, Srinath Sridhar (22 Nov 2021) [3DH]

DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion
Arthur Douillard, Alexandre Ramé, Guillaume Couairon, Matthieu Cord (22 Nov 2021) [CLL]

Toward Compact Parameter Representations for Architecture-Agnostic Neural Network Compression
Yuezhou Sun, Wenlong Zhao, Lijun Zhang, Xiao Liu, Hui Guan, Matei A. Zaharia (19 Nov 2021)

Training Neural Networks with Fixed Sparse Masks
Yi-Lin Sung, Varun Nair, Colin Raffel (18 Nov 2021) [FedML]

deepstruct -- linking deep learning and graph theory
Julian Stier, Michael Granitzer (12 Nov 2021) [GNN, PINN]

Prune Once for All: Sparse Pre-Trained Language Models
Ofir Zafrir, Ariel Larey, Guy Boudoukh, Haihao Shen, Moshe Wasserblat (10 Nov 2021) [VLM]

Revisiting Methods for Finding Influential Examples
Karthikeyan K, Anders Søgaard (08 Nov 2021) [TDI]

Gabor filter incorporated CNN for compression
Akihiro Imamura, N. Arizumi (29 Oct 2021) [CVBM]

NxMTransformer: Semi-Structured Sparsification for Natural Language Understanding via ADMM
Connor Holmes, Minjia Zhang, Yuxiong He, Bo Wu (28 Oct 2021)

RGP: Neural Network Pruning through Its Regular Graph Structure
Zhuangzhi Chen, Jingyang Xiang, Yao Lu, Qi Xuan, Xiaoniu Yang (28 Oct 2021)

Meta-Learning Sparse Implicit Neural Representations
Jaehoon Lee, Jihoon Tack, Namhoon Lee, Jinwoo Shin (27 Oct 2021)

Diversity Enhanced Active Learning with Strictly Proper Scoring Rules
Wei Tan, Lan Du, Wray Buntine (27 Oct 2021)

Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks
Yonggan Fu, Qixuan Yu, Yang Zhang, Shan-Hung Wu, Ouyang Xu, David D. Cox, Yingyan Lin (26 Oct 2021) [AAML, OOD]

MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, ..., Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin (26 Oct 2021)

ConformalLayers: A non-linear sequential neural network with associative layers
Zhen Wan, Zhuoyuan Mao, C. N. Vasconcelos (23 Oct 2021)

When to Prune? A Policy towards Early Structural Pruning
Maying Shen, Pavlo Molchanov, Hongxu Yin, J. Álvarez (22 Oct 2021) [VLM]

Probabilistic fine-tuning of pruning masks and PAC-Bayes self-bounded learning
Soufiane Hayou, Bo He, Gintare Karolina Dziugaite (22 Oct 2021)

Conditioning of Random Feature Matrices: Double Descent and Generalization Error
Zhijun Chen, Hayden Schaeffer (21 Oct 2021)

Lottery Tickets with Nonzero Biases
Jonas Fischer, Advait Gadhikar, R. Burkholz (21 Oct 2021)

HALP: Hardware-Aware Latency Pruning
Maying Shen, Hongxu Yin, Pavlo Molchanov, Lei Mao, Jianna Liu, J. Álvarez (20 Oct 2021) [VLM]

S-Cyc: A Learning Rate Schedule for Iterative Pruning of ReLU-based Networks
Shiyu Liu, Chong Min John Tan, Mehul Motani (17 Oct 2021) [CLL]

A Unified Speaker Adaptation Approach for ASR
Yingzhu Zhao, Chongjia Ni, C. Leung, Shafiq Joty, Chng Eng Siong, B. Ma (16 Oct 2021) [CLL]

Training Deep Neural Networks with Joint Quantization and Pruning of Weights and Activations
Xinyu Zhang, Ian Colbert, Ken Kreutz-Delgado, Srinjoy Das (15 Oct 2021) [MQ]

Joint Channel and Weight Pruning for Model Acceleration on Mobile Devices
Tianli Zhao, Xi Sheryl Zhang, Wentao Zhu, Jiaxing Wang, Sen Yang, Ji Liu, Jian Cheng (15 Oct 2021)

Plug-Tagger: A Pluggable Sequence Labeling Framework Using Language Models
Xin Zhou, Ruotian Ma, Tao Gui, Y. Tan, Qi Zhang, Xuanjing Huang (14 Oct 2021) [VLM]

The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks
R. Entezari, Hanie Sedghi, O. Saukh, Behnam Neyshabur (12 Oct 2021) [MoMe]

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong (12 Oct 2021) [UQCV, MLT]

Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks
Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy (11 Oct 2021) [CLL]

ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training
Hui-Po Wang, Sebastian U. Stich, Yang He, Mario Fritz (11 Oct 2021) [FedML, AI4CE]

Does Preprocessing Help Training Over-parameterized Neural Networks?
Zhao Song, Shuo Yang, Ruizhe Zhang (09 Oct 2021)
Page 10 of 15.