Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science (arXiv:1707.04780)

15 July 2017
Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H. Nguyen, M. Gibescu, A. Liotta

Papers citing "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science"

Showing 50 of 114 citing papers.
Efficient Shapley Value-based Non-Uniform Pruning of Large Language Models
Chuan Sun, Han Yu, Lizhen Cui, Xiaoxiao Li · 03 May 2025

Sparse-to-Sparse Training of Diffusion Models
Inês Cardoso Oliveira, Decebal Constantin Mocanu, Luis A. Leiva · 30 Apr 2025 · DiffM

Sculpting Memory: Multi-Concept Forgetting in Diffusion Models via Dynamic Mask and Concept-Aware Optimization
Gen Li, Yang Xiao, Jie Ji, Kaiyuan Deng, Bo Hui, Linke Guo, Xiaolong Ma · 12 Apr 2025

E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation
Boqian Wu, Q. Xiao, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Decebal Constantin Mocanu, M. V. Keulen, Elena Mocanu · 20 Feb 2025 · MedIm

Advancing Weight and Channel Sparsification with Enhanced Saliency
Xinglong Sun, Maying Shen, Hongxu Yin, Lei Mao, Pavlo Molchanov, Jose M. Alvarez · 05 Feb 2025

Brain-inspired sparse training enables Transformers and LLMs to perform as fully connected
Yingtao Zhang, Jialin Zhao, Wenjing Wu, Ziheng Liao, Umberto Michieli, C. Cannistraci · 31 Jan 2025

Symmetric Pruning of Large Language Models
Kai Yi, Peter Richtárik · 31 Jan 2025 · AAML, VLM

SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training
Tianjin Huang, Ziquan Zhu, Gaojie Jin, Lu Liu, Zhangyang Wang, Shiwei Liu · 12 Jan 2025

Pushing the Limits of Sparsity: A Bag of Tricks for Extreme Pruning
Andy Li, A. Durrant, Milan Markovic, Lu Yin, Georgios Leontidis, Tianlong Chen · 20 Nov 2024

Zeroth-Order Adaptive Neuron Alignment Based Pruning without Re-Training
Elia Cunegatti, Leonardo Lucio Custode, Giovanni Iacca · 11 Nov 2024

Layer-Adaptive State Pruning for Deep State Space Models
Minseon Gwak, Seongrok Moon, Joohwan Ko, PooGyeon Park · 05 Nov 2024

Navigating Extremes: Dynamic Sparsity in Large Output Spaces
Nasib Ullah, Erik Schultheis, Mike Lasby, Yani Andrew Ioannou, Rohit Babbar · 05 Nov 2024

EntryPrune: Neural Network Feature Selection using First Impressions
Felix Zimmer, Patrik Okanovic, Torsten Hoefler · 03 Oct 2024

Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness
Boqian Wu, Q. Xiao, Shunxin Wang, N. Strisciuglio, Mykola Pechenizkiy, M. V. Keulen, Decebal Constantin Mocanu, Elena Mocanu · 03 Oct 2024 · OOD, 3DH

LLM-Barber: Block-Aware Rebuilder for Sparsity Mask in One-Shot for Large Language Models
Yupeng Su, Ziyi Guan, Xiaoqun Liu, Tianlai Jin, Dongkuan Wu, G. Chesi, Ngai Wong, Hao Yu · 20 Aug 2024

AdapMTL: Adaptive Pruning Framework for Multitask Learning Model
Mingcan Xiang, Steven Jiaxun Tang, Qizheng Yang, Hui Guan, Tongping Liu · 07 Aug 2024 · VLM

Boosting Robustness in Preference-Based Reinforcement Learning with Dynamic Sparsity
Calarina Muslimani, Bram Grooten, Deepak Ranganatha Sastry Mamillapalli, Mykola Pechenizkiy, Decebal Constantin Mocanu, Matthew E. Taylor · 10 Jun 2024

Scorch: A Library for Sparse Deep Learning
Bobby Yan, Alexander J. Root, Trevor Gale, David Broman, Fredrik Kjolstad · 27 May 2024

Neural Network Compression for Reinforcement Learning Tasks
Dmitry A. Ivanov, D. Larionov, Oleg V. Maslennikov, V. Voevodin · 13 May 2024 · OffRL, AI4CE

Embracing Unknown Step by Step: Towards Reliable Sparse Training in Real World
Bowen Lei, Dongkuan Xu, Ruqi Zhang, Bani Mallick · 29 Mar 2024 · UQCV

Always-Sparse Training by Growing Connections with Guided Stochastic Exploration
Mike Heddes, Narayan Srinivasa, T. Givargis, Alexandru Nicolau · 12 Jan 2024

PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs
Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta · 23 Dec 2023 · VLM

Towards Sobolev Pruning
Neil Kichler, Sher Afghan, U. Naumann · 06 Dec 2023

One is More: Diverse Perspectives within a Single Network for Efficient DRL
Yiqin Tan, Ling Pan, Longbo Huang · 21 Oct 2023 · OffRL

Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs
Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, Rongrong Ji · 13 Oct 2023 · SyDa

Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity
Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, ..., Mykola Pechenizkiy, Yi Liang, Michael Bendersky, Zhangyang Wang, Shiwei Liu · 08 Oct 2023

No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-based Language Models
Jean Kaddour, Oscar Key, Piotr Nawrot, Pasquale Minervini, Matt J. Kusner · 12 Jul 2023

Structural Restricted Boltzmann Machine for image denoising and classification
Arkaitz Bidaurrazaga, A. Pérez, Roberto Santana · 16 Jun 2023 · AI4CE

Magnitude Attention-based Dynamic Pruning
Jihye Back, Namhyuk Ahn, Jang-Hyun Kim · 08 Jun 2023

Towards Memory-Efficient Training for Extremely Large Output Spaces -- Learning with 500k Labels on a Single Commodity GPU
Erik Schultheis, Rohit Babbar · 06 Jun 2023

ESL-SNNs: An Evolutionary Structure Learning Strategy for Spiking Neural Networks
Jiangrong Shen, Qi Xu, Jian K. Liu, Yueming Wang, Gang Pan, Huajin Tang · 06 Jun 2023

Adaptive Sparsity Level during Training for Efficient Time Series Forecasting with Transformers
Zahra Atashgahi, Mykola Pechenizkiy, Raymond N. J. Veldhuis, Decebal Constantin Mocanu · 28 May 2023 · AI4TS, AI4CE

Gradient Sparsification for Efficient Wireless Federated Learning with Differential Privacy
Kang Wei, Jun Li, Chuan Ma, Ming Ding, Feng Shu, Haitao Zhao, Wen Chen, Hongbo Zhu · 09 Apr 2023 · FedML

NTK-SAP: Improving neural network pruning by aligning training dynamics
Yite Wang, Dawei Li, Ruoyu Sun · 06 Apr 2023

Beyond Multilayer Perceptrons: Investigating Complex Topologies in Neural Networks
T. Boccato, Matteo Ferrante, A. Duggento, N. Toschi · 31 Mar 2023

Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency
Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie · 21 Mar 2023

Evolutionary Deep Nets for Non-Intrusive Load Monitoring
Jinsong Wang, K. Loparo · 06 Mar 2023

Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!
Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang · 03 Mar 2023

Balanced Training for Sparse GANs
Yite Wang, Jing Wu, N. Hovakimyan, Ruoyu Sun · 28 Feb 2023

A Unified Framework for Soft Threshold Pruning
Yanqing Chen, Zhengyu Ma, Wei Fang, Xiawu Zheng, Zhaofei Yu, Yonghong Tian · 25 Feb 2023

GOHSP: A Unified Framework of Graph and Optimization-based Heterogeneous Structured Pruning for Vision Transformer
Miao Yin, Burak Uzkent, Yilin Shen, Hongxia Jin, Bo Yuan · 13 Jan 2023 · ViT

Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction
Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani Mallick · 09 Jan 2023

Dynamic Sparse Network for Time Series Classification: Learning What to "see"
Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, Decebal Constantin Mocanu · 19 Dec 2022 · AI4TS

Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off
Shaoyi Huang, Bowen Lei, Dongkuan Xu, Hongwu Peng, Yue Sun, Mimi Xie, Caiwen Ding · 30 Nov 2022

You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets
Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, ..., Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu · 28 Nov 2022 · GNN

LOFT: Finding Lottery Tickets through Filter-wise Training
Qihan Wang, Chen Dun, Fangshuo Liao, C. Jermaine, Anastasios Kyrillidis · 28 Oct 2022

Gradient-based Weight Density Balancing for Robust Dynamic Sparse Training
Mathias Parger, Alexander Ertl, Paul Eibensteiner, J. H. Mueller, Martin Winter, M. Steinberger · 25 Oct 2022

Packed-Ensembles for Efficient Uncertainty Estimation
Olivier Laurent, Adrien Lafage, Enzo Tartaglione, Geoffrey Daniel, Jean-Marc Martinez, Andrei Bursuc, Gianni Franchi · 17 Oct 2022 · OODD

SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters
Shwai He, Liang Ding, Daize Dong, Miao Zhang, Dacheng Tao · 09 Oct 2022 · MoE

Optimizing Connectivity through Network Gradients for Restricted Boltzmann Machines
A. C. N. D. Oliveira, Daniel R. Figueiredo · 14 Sep 2022