Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers

Junjie Liu · Zhe Xu · Runbin Shi · R. Cheung · Hayden Kwok-Hay So
14 May 2020 · arXiv:2005.06870
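
The paper's title names its core mechanism: layers whose weights are gated by binary masks obtained by thresholding weight magnitudes against trainable thresholds, so sparsity is discovered during training rather than imposed afterwards. Below is a minimal PyTorch sketch of such a trainable masked layer. It is a hedged illustration, not the paper's implementation: the class names, the per-output-row threshold granularity, the triangular surrogate gradient, and the exp(-threshold) regularizer are assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StepSTE(torch.autograd.Function):
    """Unit step in the forward pass; a surrogate gradient in the backward
    pass, since the true derivative of the step is zero almost everywhere."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Triangular surrogate around 0 (an assumption; any smooth bump
        # concentrated near the threshold behaves similarly).
        return grad_out * torch.clamp(2.0 - 4.0 * x.abs(), min=0.0)


class MaskedLinear(nn.Module):
    """Linear layer with a trainable per-row pruning threshold (hypothetical
    name): weights whose magnitude falls below their row's threshold are
    masked to zero, and the threshold receives gradients via the surrogate."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # One trainable threshold per output row; broadcasts over columns.
        self.threshold = nn.Parameter(torch.zeros(out_features, 1))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)

    def forward(self, x):
        mask = StepSTE.apply(self.weight.abs() - self.threshold)
        return F.linear(x, self.weight * mask, self.bias)

    def sparse_regularizer(self):
        # Added to the task loss; pushes thresholds up, i.e. toward sparsity.
        return torch.exp(-self.threshold).sum()


# Usage sketch: total loss = task loss + alpha * sum of sparse_regularizer()
# over masked layers, where alpha trades off sparsity against accuracy.
layer = MaskedLinear(128, 64)
y = layer(torch.randn(8, 128))
```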

Papers citing "Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers" (20 papers):
  • Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries
    Chris Kolb, T. Weber, Bernd Bischl, David Rügamer · 04 Feb 2025
  • DynST: Dynamic Sparse Training for Resource-Constrained Spatio-Temporal Forecasting
    Hao Wu, Haomin Wen, Guibin Zhang, Yutong Xia, Kai Wang, Keli Zhang, Yu Zheng, Kun Wang · 17 Jan 2025
  • NeuroPrune: A Neuro-inspired Topological Sparse Training Algorithm for Large Language Models
    Amit Dhurandhar, Tejaswini Pedapati, Ronny Luss, Soham Dan, Aurélie C. Lozano, Payel Das, Georgios Kollias · 28 Feb 2024
  • PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs
    Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta · 23 Dec 2023 · VLM
  • Pursing the Sparse Limitation of Spiking Deep Learning Structures
    Hao-Ran Cheng, Jiahang Cao, Erjia Xiao, Mengshu Sun, Le Yang, Jize Zhang, Xue Lin, B. Kailkhura, Kaidi Xu, Renjing Xu · 18 Nov 2023
  • Elephant Neural Networks: Born to Be a Continual Learner
    Qingfeng Lan, A. R. Mahmood · 02 Oct 2023 · CLL
  • Magnitude Attention-based Dynamic Pruning
    Jihye Back, Namhyuk Ahn, Jang-Hyun Kim · 08 Jun 2023
  • Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning
    Moonseok Choi, Hyungi Lee, G. Nam, Juho Lee · 24 May 2023
  • Boosting Convolutional Neural Networks with Middle Spectrum Grouped Convolution
    Z. Su, Jiehua Zhang, Tianpeng Liu, Zhen Liu, Shuanghui Zhang, M. Pietikäinen, Li Liu · 13 Apr 2023
  • NTK-SAP: Improving neural network pruning by aligning training dynamics
    Yite Wang, Dawei Li, Ruoyu Sun · 06 Apr 2023
  • Balanced Training for Sparse GANs
    Yite Wang, Jing Wu, N. Hovakimyan, Ruoyu Sun · 28 Feb 2023
  • MiNL: Micro-images based Neural Representation for Light Fields
    Ziru Xu, Henan Wang, Zhibo Chen · 17 Sep 2022
  • AdaptCL: Adaptive Continual Learning for Tackling Heterogeneity in Sequential Datasets
    Yuqing Zhao, Divya Saxena, Jiannong Cao · 22 Jul 2022 · CLL
  • Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees
    Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang · 02 Jun 2022 · AI4CE
  • Compression-aware Training of Neural Networks using Frank-Wolfe
    Max Zimmer, Christoph Spiegel, Sebastian Pokutta · 24 May 2022
  • RGP: Neural Network Pruning through Its Regular Graph Structure
    Zhuangzhi Chen, Jingyang Xiang, Yao Lu, Qi Xuan, Xiaoniu Yang · 28 Oct 2021
  • Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
    Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu · 19 Jun 2021
  • Sparse Training Theory for Scalable and Efficient Agents
    Decebal Constantin Mocanu, Elena Mocanu, T. Pinto, Selima Curci, Phuong H. Nguyen, M. Gibescu, D. Ernst, Z. Vale · 02 Mar 2021
  • Training highly effective connectivities within neural networks with randomly initialized, fixed weights
    Cristian Ivan, Razvan V. Florian · 30 Jun 2020
  • Sparse Weight Activation Training
    Md Aamir Raihan, Tor M. Aamodt · 07 Jan 2020