A Signal Propagation Perspective for Pruning Neural Networks at Initialization

14 June 2019
Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, Philip Torr
AAML
ArXiv · PDF · HTML

Papers citing "A Signal Propagation Perspective for Pruning Neural Networks at Initialization"

34 / 34 papers shown
Mutual Information Preserving Neural Network Pruning
Charles Westphal, Stephen Hailes, Mirco Musolesi
31 Oct 2024
ONNXPruner: ONNX-Based General Model Pruning Adapter
Dongdong Ren, Wenbin Li, Tianyu Ding, Lei Wang, Qi Fan, Jing Huo, Hongbing Pan, Yang Gao
10 Apr 2024
Towards Explaining Deep Neural Network Compression Through a Probabilistic Latent Space
Mahsa Mozafari-Nia, Salimeh Yasaei Sekeh
29 Feb 2024
Always-Sparse Training by Growing Connections with Guided Stochastic Exploration
Mike Heddes, Narayan Srinivasa, T. Givargis, Alexandru Nicolau
12 Jan 2024
Neural Networks at a Fraction with Pruned Quaternions
Sahel Mohammad Iqbal, Subhankar Mishra
13 Aug 2023
TIPS: Topologically Important Path Sampling for Anytime Neural Networks
Guihong Li, Kartikeya Bhardwaj, Yuedong Yang, R. Marculescu
AAML
13 May 2023
NTK-SAP: Improving neural network pruning by aligning training dynamics
Yite Wang, Dawei Li, Ruoyu Sun
06 Apr 2023
Balanced Training for Sparse GANs
Yite Wang, Jing Wu, N. Hovakimyan, Ruoyu Sun
28 Feb 2023
Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning
Huan Wang, Can Qin, Yue Bai, Yun Fu
12 Jan 2023
The Effect of Data Dimensionality on Neural Network Prunability
Zachary Ankner, Alex Renda, Gintare Karolina Dziugaite, Jonathan Frankle, Tian Jin
01 Dec 2022
LOFT: Finding Lottery Tickets through Filter-wise Training
Qihan Wang, Chen Dun, Fangshuo Liao, C. Jermaine, Anastasios Kyrillidis
28 Oct 2022
On skip connections and normalisation layers in deep optimisation
L. MacDonald, Jack Valmadre, Hemanth Saratchandran, Simon Lucey
ODL
10 Oct 2022
Scale-invariant Bayesian Neural Networks with Connectivity Tangent Kernel
Sungyub Kim, Si-hun Park, Kyungsu Kim, Eunho Yang
BDL
30 Sep 2022
SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance
Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kévin Bailly
08 Jul 2022
Sparse Double Descent: Where Network Pruning Aggravates Overfitting
Zhengqi He, Zeke Xie, Quanzhi Zhu, Zengchang Qin
17 Jun 2022
Convolutional and Residual Networks Provably Contain Lottery Tickets
R. Burkholz
UQCV, MLT
04 May 2022
Most Activation Functions Can Win the Lottery Without Excessive Depth
R. Burkholz
MLT
04 May 2022
Training-free Transformer Architecture Search
Qinqin Zhou, Kekai Sheng, Xiawu Zheng, Ke Li, Xing Sun, Yonghong Tian, Jie Chen, Rongrong Ji
ViT
23 Mar 2022
The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe
09 Mar 2022
Meta-Learning Sparse Implicit Neural Representations
Jaehoon Lee, Jihoon Tack, Namhoon Lee, Jinwoo Shin
27 Oct 2021
Lottery Tickets with Nonzero Biases
Jonas Fischer, Advait Gadhikar, R. Burkholz
21 Oct 2021
RED++: Data-Free Pruning of Deep Neural Networks via Input Splitting and Output Merging
Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kévin Bailly
30 Sep 2021
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
OOD
28 Jun 2021
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
19 Jun 2021
Recent Advances on Neural Network Pruning at Initialization
Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu
CVBM
11 Mar 2021
Reduced-Order Neural Network Synthesis with Robustness Guarantees
R. Drummond, M. Turner, S. Duncan
18 Feb 2021
Rethinking the Value of Transformer Components
Wenxuan Wang, Zhaopeng Tu
07 Nov 2020
Know What You Don't Need: Single-Shot Meta-Pruning for Attention Heads
Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Qun Liu, Maosong Sun
VLM
07 Nov 2020
Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win
Utku Evci, Yani Andrew Ioannou, Cem Keskin, Yann N. Dauphin
07 Oct 2020
OrthoReg: Robust Network Pruning Using Orthonormality Regularization
Ekdeep Singh Lubana, Puja Trivedi, C. Hougen, Robert P. Dick, Alfred Hero
10 Sep 2020
Revisiting Loss Modelling for Unstructured Pruning
César Laurent, Camille Ballas, Thomas George, Nicolas Ballas, Pascal Vincent
22 Jun 2020
Progressive Skeletonization: Trimming more fat from a network at initialization
Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip Torr, Grégory Rogez, P. Dokania
16 Jun 2020
Gradient Monitored Reinforcement Learning
Mohammed Sharafath Abdul Hameed, Gavneet Singh Chadha, Andreas Schwung, S. Ding
25 May 2020
Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington
14 Jun 2018