Sparse Training via Boosting Pruning Plasticity with Neuroregeneration (arXiv:2106.10404)

19 June 2021
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu

Papers citing "Sparse Training via Boosting Pruning Plasticity with Neuroregeneration"

28 of 78 citing papers shown.

Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off
Shaoyi Huang, Bowen Lei, Dongkuan Xu, Hongwu Peng, Yue Sun, Mimi Xie, Caiwen Ding
30 Nov 2022

You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets
Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, ..., Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu
GNN
28 Nov 2022

Exploiting the Partly Scratch-off Lottery Ticket for Quantization-Aware Training
Mingliang Xu, Gongrui Nan, Yuxin Zhang, Rongrong Ji
MQ
12 Nov 2022

Gradient-based Weight Density Balancing for Robust Dynamic Sparse Training
Mathias Parger, Alexander Ertl, Paul Eibensteiner, J. H. Mueller, Martin Winter, M. Steinberger
25 Oct 2022

Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach
Peng Mi, Li Shen, Tianhe Ren, Yiyi Zhou, Xiaoshuai Sun, Rongrong Ji, Dacheng Tao
AAML
11 Oct 2022

Why Random Pruning Is All We Need to Start Sparse
Advait Gadhikar, Sohom Mukherjee, R. Burkholz
05 Oct 2022

Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks
Chuang Liu, Xueqi Ma, Yinbing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, Danilo Mandic
18 Jul 2022

Learning Best Combination for Efficient N:M Sparsity
Yuxin Zhang, Mingbao Lin, Zhihang Lin, Yiting Luo, Ke Li, Yongjian Wu, Rongrong Ji
14 Jun 2022

Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm
Aidan Good, Jia-Huei Lin, Hannah Sieg, Mikey Ferguson, Xin Yu, Shandian Zhe, J. Wieczorek, Thiago Serra
07 Jun 2022

Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees
Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang
AI4CE
02 Jun 2022

DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training
Rong Dai, Li Shen, Fengxiang He, Xinmei Tian, Dacheng Tao
FedML
01 Jun 2022

Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training
Lu Yin, Vlado Menkovski, Meng Fang, Tianjin Huang, Yulong Pei, Mykola Pechenizkiy, Decebal Constantin Mocanu, Shiwei Liu
30 May 2022

Convolutional and Residual Networks Provably Contain Lottery Tickets
R. Burkholz
UQCV, MLT
04 May 2022

Most Activation Functions Can Win the Lottery Without Excessive Depth
R. Burkholz
MLT
04 May 2022

The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe
09 Mar 2022

The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy
05 Feb 2022

Signing the Supermask: Keep, Hide, Invert
Nils Koster, O. Grothe, Achim Rettinger
31 Jan 2022

OptG: Optimizing Gradient-driven Criteria in Network Sparsity
Yuxin Zhang, Mingbao Lin, Yonghong Tian, Rongrong Ji
30 Jan 2022

Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better
Sameer Bibikar, H. Vikalo, Zhangyang Wang, Xiaohan Chen
FedML
18 Dec 2021

Plant 'n' Seek: Can You Find the Winning Ticket?
Jonas Fischer, R. Burkholz
22 Nov 2021

Efficient Neural Network Training via Forward and Backward Propagation Sparsification
Xiao Zhou, Weizhong Zhang, Zonghao Chen, Shizhe Diao, Tong Zhang
10 Nov 2021

Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
OOD
28 Jun 2021

Carbon Emissions and Large Neural Network Training
David A. Patterson, Joseph E. Gonzalez, Quoc V. Le, Chen Liang, Lluís-Miquel Munguía, D. Rothchild, David R. So, Maud Texier, J. Dean
AI4CE
21 Apr 2021

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry
16 Feb 2021

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
MQ
31 Jan 2021

A Unified Paths Perspective for Pruning at Initialization
Thomas Gebhart, Udit Saxena, Paul Schrater
26 Jan 2021

Fixing the train-test resolution discrepancy: FixEfficientNet
Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Hervé Jégou
AAML
18 Mar 2020

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
05 Mar 2020