Understanding Sparse Neural Networks from their Topology via Multipartite Graph Representations

26 May 2023
Elia Cunegatti, Matteo Farina, Doina Bucur, Giovanni Iacca
Links: arXiv (abs) · PDF · HTML · GitHub (4★)

Papers citing "Understanding Sparse Neural Networks from their Topology via Multipartite Graph Representations" (22 papers):
  • Zeroth-Order Adaptive Neuron Alignment Based Pruning without Re-Training. Elia Cunegatti, Leonardo Lucio Custode, Giovanni Iacca (11 Nov 2024)
  • Why Random Pruning Is All We Need to Start Sparse. Advait Gadhikar, Sohom Mukherjee, Rebekka Burkholz (05 Oct 2022)
  • Rare Gems: Finding Lottery Tickets at Initialization. Kartik K. Sreenivasan, Jy-yong Sohn, Liu Yang, Matthew Grinde, Alliot Nagle, Hongyi Wang, Eric P. Xing, Kangwook Lee, Dimitris Papailiopoulos (24 Feb 2022)
  • The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training. Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy (05 Feb 2022)
  • Leveraging the Graph Structure of Neural Network Training Dynamics. Fatemeh Vahedian, Ruiyu Li, Puja Trivedi, Di Jin, Danai Koutra (09 Nov 2021)
  • Connectivity Matters: Neural Network Pruning Through the Lens of Effective Sparsity. Artem Vysogorets, Julia Kempe (05 Jul 2021)
  • Graph Structure of Neural Networks. Jiaxuan You, Jure Leskovec, Kaiming He, Saining Xie (13 Jul 2020)
  • Progressive Skeletonization: Trimming more fat from a network at initialization. Pau de Jorge, Amartya Sanyal, Harkirat Singh Behl, Philip Torr, Grégory Rogez, Puneet K. Dokania (16 Jun 2020)
  • Hcore-Init: Neural Network Initialization based on Graph Degeneracy. Stratis Limnios, George Dasoulas, Dimitrios M. Thilikos, Michalis Vazirgiannis (16 Apr 2020)
  • Proving the Lottery Ticket Hypothesis: Pruning is All You Need. Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir (03 Feb 2020)
  • What's Hidden in a Randomly Weighted Neural Network? Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, Mohammad Rastegari (29 Nov 2019)
  • Rigging the Lottery: Making All Tickets Winners. Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, Erich Elsen (25 Nov 2019)
  • Sparse Networks from Scratch: Faster Training without Losing Performance. Tim Dettmers, Luke Zettlemoyer (10 Jul 2019)
  • Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask. Hattie Zhou, Janice Lan, Rosanne Liu, Jason Yosinski (03 May 2019)
  • Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology. Bastian Rieck, Matteo Togninalli, Christian Bock, Michael Moor, Max Horn, Thomas Gumbsch, Karsten Borgwardt (23 Dec 2018)
  • SNIP: Single-shot Network Pruning based on Connection Sensitivity. Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr (04 Oct 2018)
  • The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. Jonathan Frankle, Michael Carbin (09 Mar 2018)
  • To prune, or not to prune: exploring the efficacy of pruning for model compression. Michael Zhu, Suyog Gupta (05 Oct 2017)
  • A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets. Patryk Chrabaszcz, Ilya Loshchilov, Frank Hutter (27 Jul 2017)
  • Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science. Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H. Nguyen, Madeleine Gibescu, Antonio Liotta (15 Jul 2017)
  • Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Sergey Ioffe, Christian Szegedy (11 Feb 2015)
  • Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun (06 Feb 2015)