Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science
Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H. Nguyen, M. Gibescu, A. Liotta
arXiv:1707.04780, 15 July 2017
Papers citing "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science" (50 of 114 shown)
Towards Sparsification of Graph Neural Networks [GNN]
Hongwu Peng, Deniz Gurevin, Shaoyi Huang, Tong Geng, Weiwen Jiang, O. Khan, Caiwen Ding
11 Sep 2022

4Ward: a Relayering Strategy for Efficient Training of Arbitrarily Complex Directed Acyclic Graphs
T. Boccato, Matteo Ferrante, A. Duggento, N. Toschi
05 Sep 2022

Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment
Jie Zhu, Leye Wang, Xiao Han
11 Aug 2022

Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks
Chuang Liu, Xueqi Ma, Yinbing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, Danilo Mandic
18 Jul 2022

Context-sensitive neocortical neurons transform the effectiveness and efficiency of neural information processing
Ahsan Adeel, Mario Franco, Mohsin Raza, K. Ahmed
15 Jul 2022
Winning the Lottery Ahead of Time: Efficient Early Network Pruning
John Rachwan, Daniel Zügner, Bertrand Charpentier, Simon Geisler, Morgane Ayle, Stephan Günnemann
21 Jun 2022

Leveraging Structured Pruning of Convolutional Neural Networks [CVBM]
Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, David Bertrand, T. Hannagan
13 Jun 2022

Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees [AI4CE]
Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang
02 Jun 2022

Spartan: Differentiable Sparsity via Regularized Transportation
Kai Sheng Tai, Taipeng Tian, Ser-Nam Lim
27 May 2022

Perturbation of Deep Autoencoder Weights for Model Compression and Classification of Tabular Data
Manar D. Samad, Sakib Abrar
17 May 2022
Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance
Shiwei Liu, Yuesong Tian, Tianlong Chen, Li Shen
05 Mar 2022

Evolving Neural Networks with Optimal Balance between Information Flow and Connections Cost
A. Khalili, A. Bouchachia
12 Feb 2022

Signing the Supermask: Keep, Hide, Invert
Nils Koster, O. Grothe, Achim Rettinger
31 Jan 2022

Achieving Personalized Federated Learning with Sparse Local Models [FedML]
Tiansheng Huang, Shiwei Liu, Li Shen, Fengxiang He, Weiwei Lin, Dacheng Tao
27 Jan 2022

Direct Mutation and Crossover in Genetic Algorithms Applied to Reinforcement Learning Tasks
Tarek Faycal, Claudio Zito
13 Jan 2022
Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks
Kevin Lee Hunter, Lawrence Spracklen, Subutai Ahmad
27 Dec 2021

Asymptotic properties of one-layer artificial neural networks with sparse connectivity
Christian Hirsch, Matthias Neumann, Volker Schmidt
01 Dec 2021

Efficient Neural Network Training via Forward and Backward Propagation Sparsification
Xiao Zhou, Weizhong Zhang, Zonghao Chen, Shizhe Diao, Tong Zhang
10 Nov 2021

BitTrain: Sparse Bitmap Compression for Memory-Efficient Training on the Edge [MQ]
Abdelrahman I. Hosny, Marina Neseem, Sherief Reda
29 Oct 2021

MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, ..., Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin
26 Oct 2021
Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks [CLL]
Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy
11 Oct 2021

Powerpropagation: A sparsity inducing weight reparameterisation
Jonathan Richard Schwarz, Siddhant M. Jayakumar, Razvan Pascanu, P. Latham, Yee Whye Teh
01 Oct 2021

Architecture Aware Latency Constrained Sparse Neural Networks
Tianli Zhao, Qinghao Hu, Xiangyu He, Weixiang Xu, Jiaxing Wang, Cong Leng, Jian Cheng
01 Sep 2021

Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity [OOD]
Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
28 Jun 2021

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
19 Jun 2021
Efficient Lottery Ticket Finding: Less Data is More
Zhenyu Zhang, Xuxi Chen, Tianlong Chen, Zhangyang Wang
06 Jun 2021

Can Subnetwork Structure be the Key to Out-of-Distribution Generalization? [OOD]
Dinghuai Zhang, Kartik Ahuja, Yilun Xu, Yisen Wang, Aaron Courville
05 Jun 2021

ResMLP: Feedforward networks for image classification with data-efficient training [VLM]
Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, ..., Gautier Izacard, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, Hervé Jégou
07 May 2021

Initialization and Regularization of Factorized Neural Layers
M. Khodak, Neil A. Tenenholtz, Lester W. Mackey, Nicolò Fusi
03 May 2021

Effective Sparsification of Neural Networks with Global Sparsity Constraint
Xiao Zhou, Weizhong Zhang, Hang Xu, Tong Zhang
03 May 2021
Lottery Jackpots Exist in Pre-trained Models
Yuxin Zhang, Mingbao Lin, Yan Wang, Rongrong Ji
18 Apr 2021

The Elastic Lottery Ticket Hypothesis [OOD]
Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang
30 Mar 2021

Recent Advances on Neural Network Pruning at Initialization [CVBM]
Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu
11 Mar 2021

Sparse Training Theory for Scalable and Efficient Agents
Decebal Constantin Mocanu, Elena Mocanu, T. Pinto, Selima Curci, Phuong H. Nguyen, M. Gibescu, D. Ernst, Z. Vale
02 Mar 2021

Consistent Sparse Deep Learning: Theory and Computation [BDL]
Y. Sun, Qifan Song, F. Liang
25 Feb 2021
An Information-Theoretic Justification for Model Pruning
Berivan Isik, Tsachy Weissman, Albert No
16 Feb 2021

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry
16 Feb 2021

Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, Hongsheng Li
08 Feb 2021

Interpreting Neural Networks as Gradual Argumentation Frameworks (Including Proof Appendix) [AI4CE]
Nico Potyka
10 Dec 2020

Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders
Zahra Atashgahi, Ghada Sokar, T. Lee, Elena Mocanu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy
01 Dec 2020
FreezeNet: Full Performance by Reduced Storage Costs
Paul Wimmer, Jens Mehnert, Alexandru Paul Condurache
28 Nov 2020

Rethinking Weight Decay For Efficient Neural Network Pruning
Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, T. Hannagan, David Bertrand
20 Nov 2020

Deep Neural Networks using a Single Neuron: Folded-in-Time Architecture using Feedback-Modulated Delay Loops [AI4CE]
Florian Stelzer, André Röhm, Raul Vicente, Ingo Fischer
19 Nov 2020

Layer-adaptive sparsity for the Magnitude-based Pruning
Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
15 Oct 2020

Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win
Utku Evci, Yani Andrew Ioannou, Cem Keskin, Yann N. Dauphin
07 Oct 2020
Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training
Dingqing Yang, Amin Ghasemazar, X. Ren, Maximilian Golub, G. Lemieux, Mieszko Lis
23 Sep 2020

Sparse Linear Networks with a Fixed Butterfly Structure: Theory and Practice
Nir Ailon, Omer Leibovitch, Vineet Nair
17 Jul 2020

Supermasks in Superposition [SSL, CLL]
Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, J. Yosinski, Ali Farhadi
26 Jun 2020

Progressive Skeletonization: Trimming more fat from a network at initialization
Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip Torr, Grégory Rogez, P. Dokania
16 Jun 2020

An Overview of Neural Network Compression [AI4CE]
James O'Neill
05 Jun 2020