Pruning neural networks without any data by iteratively conserving synaptic flow
Hidenori Tanaka, D. Kunin, Daniel L. K. Yamins, Surya Ganguli
arXiv:2006.05467 (9 June 2020)

Papers citing "Pruning neural networks without any data by iteratively conserving synaptic flow" (50 of 113 shown):
GreenFactory: Ensembling Zero-Cost Proxies to Estimate Performance of Neural Networks. Gabriel Cortes, Nuno Lourenço, Paolo Romano, Penousal Machado. 14 May 2025. [UQCV, FedML]

Onboard Optimization and Learning: A Survey. Monirul Islam Pavel, Siyi Hu, Mahardhika Pratama, Ryszard Kowalczyk. 07 May 2025.

Model Connectomes: A Generational Approach to Data-Efficient Language Models. Klemen Kotar, Greta Tuckute. 29 Apr 2025.

RBFleX-NAS: Training-Free Neural Architecture Search Using Radial Basis Function Kernel and Hyperparameter Detection. Tomomasa Yamasaki, Zhehui Wang, Tao Luo, Niangjun Chen, Bo Wang. 26 Mar 2025.

Variation Matters: from Mitigating to Embracing Zero-Shot NAS Ranking Function Variation. Pavel Rumiantsev, Mark Coates. 27 Feb 2025.

E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation. Boqian Wu, Q. Xiao, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Decebal Constantin Mocanu, M. V. Keulen, Elena Mocanu. 20 Feb 2025. [MedIm]

NEAR: A Training-Free Pre-Estimator of Machine Learning Model Performance. Raphael T. Husistein, Markus Reiher, Marco Eckhoff. 20 Feb 2025.

Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries. Chris Kolb, T. Weber, Bernd Bischl, David Rügamer. 04 Feb 2025.

UPAQ: A Framework for Real-Time and Energy-Efficient 3D Object Detection in Autonomous Vehicles. Abhishek Balasubramaniam, Febin P. Sunny, S. Pasricha. 08 Jan 2025. [3DPC]

Pushing the Limits of Sparsity: A Bag of Tricks for Extreme Pruning. Andy Li, A. Durrant, Milan Markovic, Lu Yin, Georgios Leontidis, Tianlong Chen. 20 Nov 2024.
OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition. Stephen Zhang, Vardan Papyan. 20 Sep 2024. [VLM]

NASH: Neural Architecture and Accelerator Search for Multiplication-Reduced Hybrid Models. Yang Xu, Huihong Shi, Zhongfeng Wang. 07 Sep 2024.

Mask in the Mirror: Implicit Sparsification. Tom Jacobs, R. Burkholz. 19 Aug 2024.

Network Fission Ensembles for Low-Cost Self-Ensembles. Hojung Lee, Jong-Seok Lee. 05 Aug 2024. [UQCV]

Efficient Multi-Objective Neural Architecture Search via Pareto Dominance-based Novelty Search. An Vo, Ngoc Hoang Luong. 30 Jul 2024.

A Generic Layer Pruning Method for Signal Modulation Recognition Deep Learning Models. Yao Lu, Yutao Zhu, Yuqi Li, Dongwei Xu, Yun Lin, Qi Xuan, Xiaoniu Yang. 12 Jun 2024.

Dual sparse training framework: inducing activation map sparsity via Transformed ℓ1 regularization. Xiaolong Yu, Cong Tian. 30 May 2024.

Survival of the Fittest Representation: A Case Study with Modular Addition. Xiaoman Delores Ding, Zifan Carl Guo, Eric J. Michaud, Ziming Liu, Max Tegmark. 27 May 2024.

Retrievable Domain-Sensitive Feature Memory for Multi-Domain Recommendation. Yuang Zhao, Zhaocheng Du, Qinglin Jia, Linxuan Zhang, Zhenhua Dong, Ruiming Tang. 21 May 2024.

The Simpler The Better: An Entropy-Based Importance Metric To Reduce Neural Networks' Depth. Victor Quétu, Zhu Liao, Enzo Tartaglione. 27 Apr 2024.
Rapid Deployment of DNNs for Edge Computing via Structured Pruning at Initialization. Bailey J. Eccles, Leon Wong, Blesson Varghese. 22 Apr 2024.

Anytime Neural Architecture Search on Tabular Data. Naili Xing, Shaofeng Cai, Zhaojing Luo, Bengchin Ooi, Jian Pei. 15 Mar 2024.

Robustifying and Boosting Training-Free Neural Architecture Search. Zhenfeng He, Yao Shu, Zhongxiang Dai, K. H. Low. 12 Mar 2024.

NeuroPrune: A Neuro-inspired Topological Sparse Training Algorithm for Large Language Models. Amit Dhurandhar, Tejaswini Pedapati, Ronny Luss, Soham Dan, Aurélie C. Lozano, Payel Das, Georgios Kollias. 28 Feb 2024.

Always-Sparse Training by Growing Connections with Guided Stochastic Exploration. Mike Heddes, Narayan Srinivasa, T. Givargis, Alexandru Nicolau. 12 Jan 2024.

PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs. Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta. 23 Dec 2023. [VLM]

Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks. Xiaonan Liu, T. Ratnarajah, M. Sellathurai, Yonina C. Eldar. 04 Sep 2023.

An Evaluation of Zero-Cost Proxies -- from Neural Architecture Performance to Model Robustness. Jovita Lukasik, Michael Moeller, M. Keuper. 18 Jul 2023.

Biologically-Motivated Learning Model for Instructed Visual Processing. R. Abel, S. Ullman. 04 Jun 2023.

Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning. Moonseok Choi, Hyungi Lee, G. Nam, Juho Lee. 24 May 2023.
Masked Bayesian Neural Networks: Theoretical Guarantee and its Posterior Inference. Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Gyuseung Baek, Yongdai Kim. 24 May 2023. [BDL]

NTK-SAP: Improving neural network pruning by aligning training dynamics. Yite Wang, Dawei Li, Ruoyu Sun. 06 Apr 2023.

Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency. Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie. 21 Mar 2023.

Automatic Attention Pruning: Improving and Automating Model Pruning using Attentions. Kaiqi Zhao, Animesh Jain, Ming Zhao. 14 Mar 2023.

Efficient Transformer-based 3D Object Detection with Dynamic Token Halting. Mao Ye, Gregory P. Meyer, Yuning Chai, Qiang Liu. 09 Mar 2023.

Maximizing Spatio-Temporal Entropy of Deep 3D CNNs for Efficient Video Recognition. Junyan Wang, Zhenhong Sun, Yichen Qian, Dong Gong, Xiuyu Sun, Ming Lin, M. Pagnucco, Yang Song. 05 Mar 2023. [3DPC]

Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang. 03 Mar 2023.

Balanced Training for Sparse GANs. Yite Wang, Jing Wu, N. Hovakimyan, Ruoyu Sun. 28 Feb 2023.

Considering Layerwise Importance in the Lottery Ticket Hypothesis. Benjamin Vandersmissen, José Oramas. 22 Feb 2023.

Synaptic Stripping: How Pruning Can Bring Dead Neurons Back To Life. Tim Whitaker, L. D. Whitley. 11 Feb 2023. [CVBM]

Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks with Quantum Computation. H. Yamasaki, Sathyawageeswar Subramanian, Satoshi Hayakawa, Sho Sonoda. 27 Jan 2023. [MLT]

ZiCo: Zero-shot NAS via Inverse Coefficient of Variation on Gradients. Guihong Li, Yuedong Yang, Kartikeya Bhardwaj, R. Marculescu. 26 Jan 2023.
Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions. Junyang Cai, Khai-Nguyen Nguyen, Nishant Shrestha, Aidan Good, Ruisen Tu, Xin Yu, Shandian Zhe, Thiago Serra. 19 Jan 2023. [MLT]

Efficient Evaluation Methods for Neural Architecture Search: A Survey. Xiangning Xie, Xiaotian Song, Zeqiong Lv, Gary G. Yen, Weiping Ding, Yizhou Sun. 14 Jan 2023.

COLT: Cyclic Overlapping Lottery Tickets for Faster Pruning of Convolutional Neural Networks. Md. Ismail Hossain, Mohammed Rakib, M. M. L. Elahi, Nabeel Mohammed, Shafin Rahman. 24 Dec 2022.

DAS: Neural Architecture Search via Distinguishing Activation Score. Yuqiao Liu, Haipeng Li, Yanan Sun, Shuaicheng Liu. 23 Dec 2022.

Dynamic Sparse Network for Time Series Classification: Learning What to "See". Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, Decebal Constantin Mocanu. 19 Dec 2022. [AI4TS]

Can We Find Strong Lottery Tickets in Generative Models? Sangyeop Yeo, Yoojin Jang, Jy-yong Sohn, Dongyoon Han, Jaejun Yoo. 16 Dec 2022.

Efficient Stein Variational Inference for Reliable Distribution-lossless Network Pruning. Yingchun Wang, Song Guo, Jingcai Guo, Weizhan Zhang, Yi Tian Xu, Jiewei Zhang, Yi Liu. 07 Dec 2022.

Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off. Shaoyi Huang, Bowen Lei, Dongkuan Xu, Hongwu Peng, Yue Sun, Mimi Xie, Caiwen Ding. 30 Nov 2022.