The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training

5 February 2022
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy
arXiv:2202.02643
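
For context, the "most naive baseline" in the title is static sparse training with a random mask: sample a random sparsity pattern before training and keep it fixed throughout. Below is a minimal sketch, assuming PyTorch; the function names and the uniform layer-wise sparsity are illustrative choices, not the paper's exact protocol (the paper also evaluates non-uniform layer-wise ratios such as ERK).

```python
import torch
import torch.nn as nn

def random_prune_masks(model: nn.Module, sparsity: float = 0.9) -> dict:
    """Sample a random binary mask for every weight matrix, at a uniform
    layer-wise sparsity. Biases and norm parameters are left dense."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:          # skip biases / LayerNorm-style params
            continue
        n_keep = int(param.numel() * (1.0 - sparsity))
        keep = torch.randperm(param.numel(), device=param.device)[:n_keep]
        mask = torch.zeros(param.numel(), device=param.device)
        mask[keep] = 1.0
        masks[name] = mask.view_as(param)
    return masks

def apply_masks(model: nn.Module, masks: dict) -> None:
    """Zero out pruned weights. Call once after mask creation and again
    after every optimizer step, so the random pattern stays fixed."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])
```

Training then proceeds as usual, except the masks are reapplied after every update so pruned weights stay at zero.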

Papers citing "The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training"

21 papers
Efficient Shapley Value-based Non-Uniform Pruning of Large Language Models
Chuan Sun, Han Yu, Lizhen Cui, Xiaoxiao Li · 03 May 2025

Sparse-to-Sparse Training of Diffusion Models
Inês Cardoso Oliveira, Decebal Constantin Mocanu, Luis A. Leiva · DiffM · 30 Apr 2025

SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training
Tianjin Huang, Ziquan Zhu, Gaojie Jin, Lu Liu, Zhangyang Wang, Shiwei Liu · 12 Jan 2025

Zeroth-Order Adaptive Neuron Alignment Based Pruning without Re-Training
Elia Cunegatti, Leonardo Lucio Custode, Giovanni Iacca · 11 Nov 2024

Two Sparse Matrices are Better than One: Sparsifying Neural Networks with Double Sparse Factorization
Vladimír Boža, Vladimír Macko · 27 Sep 2024

Mask in the Mirror: Implicit Sparsification
Tom Jacobs, R. Burkholz · 19 Aug 2024

Network Fission Ensembles for Low-Cost Self-Ensembles
Hojung Lee, Jong-Seok Lee · UQCV · 05 Aug 2024

Embracing Unknown Step by Step: Towards Reliable Sparse Training in Real World
Bowen Lei, Dongkuan Xu, Ruqi Zhang, Bani Mallick · UQCV · 29 Mar 2024

Stochastic Subnetwork Annealing: A Regularization Technique for Fine Tuning Pruned Subnetworks
Tim Whitaker, Darrell Whitley · 16 Jan 2024

Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity
Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, ..., Mykola Pechenizkiy, Yi Liang, Michael Bendersky, Zhangyang Wang, Shiwei Liu · 08 Oct 2023

Is Last Layer Re-Training Truly Sufficient for Robustness to Spurious Correlations?
Phuong Quynh Le, Jorg Schlotterer, Christin Seifert · OOD · 01 Aug 2023

Exploring the Performance of Pruning Methods in Neural Networks: An Empirical Study of the Lottery Ticket Hypothesis
Eirik Fladmark, Muhammad Hamza Sajjad, Laura Brinkholm Justesen · 26 Mar 2023

Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency
Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie · 21 Mar 2023

Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!
Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang · 03 Mar 2023

Progressive Learning without Forgetting
Tao Feng, Hangjie Yuan, Mang Wang, Ziyuan Huang, Ang Bian, Jianzhou Zhang · CLL, KELM · 28 Nov 2022

LOFT: Finding Lottery Tickets through Filter-wise Training
Qihan Wang, Chen Dun, Fangshuo Liao, C. Jermaine, Anastasios Kyrillidis · 28 Oct 2022

Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach
Peng Mi, Li Shen, Tianhe Ren, Yiyi Zhou, Xiaoshuai Sun, Rongrong Ji, Dacheng Tao · AAML · 11 Oct 2022

Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance
Shiwei Liu, Yuesong Tian, Tianlong Chen, Li Shen · 05 Mar 2022

The Lottery Ticket Hypothesis for Pre-trained BERT Networks
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin · 23 Jul 2020

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin · 05 Mar 2020

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell · UQCV, BDL · 05 Dec 2016