ResearchTrend.AI
Powerpropagation: A sparsity inducing weight reparameterisation
arXiv:2110.00296 · 1 October 2021
Jonathan Richard Schwarz, Siddhant M. Jayakumar, Razvan Pascanu, P. Latham, Yee Whye Teh
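As background on the technique the citing papers build on: Powerpropagation reparameterises each trainable weight as w = φ·|φ|^(α−1), so that gradients with respect to φ are scaled by α·|φ|^(α−1) and small-magnitude weights receive small updates, biasing training toward sparse solutions. The sketch below is an illustrative summary of that reparameterisation; the function names are hypothetical and not taken from the paper's code.

```python
def powerprop(phi: float, alpha: float = 2.0) -> float:
    """Effective weight under Powerpropagation: w = phi * |phi|**(alpha - 1).

    alpha = 1 recovers the standard parameterisation; alpha > 1 biases
    training toward sparse solutions.
    """
    return phi * abs(phi) ** (alpha - 1.0)


def powerprop_grad(phi: float, grad_w: float, alpha: float = 2.0) -> float:
    """Gradient w.r.t. phi via the chain rule: dw/dphi = alpha * |phi|**(alpha - 1).

    Small-magnitude weights therefore receive small updates (a
    "rich get richer" dynamic), so many weights stay near zero
    and can be pruned cheaply after training.
    """
    return grad_w * alpha * abs(phi) ** (alpha - 1.0)


# With alpha = 2, a weight of magnitude 0.1 shrinks to an effective 0.01,
# while a weight of magnitude 2.0 grows to an effective 4.0.
print(powerprop(0.1), powerprop(2.0))
```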

Papers citing "Powerpropagation: A sparsity inducing weight reparameterisation"

44 / 44 papers shown
SparsyFed: Sparse Adaptive Federated Training
Adriano Guastella, Lorenzo Sani, Alex Iacob, Alessio Mora, Paolo Bellavista, Nicholas D. Lane
FedML · 07 Apr 2025

An Efficient Row-Based Sparse Fine-Tuning
Cen-Jhih Li, Aditya Bhaskara
17 Feb 2025

Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries
Chris Kolb, T. Weber, Bernd Bischl, David Rügamer
04 Feb 2025

Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training
Pihe Hu, Shaolong Li, Zhuoran Li, L. Pan, Longbo Huang
28 Sep 2024

Mixed Sparsity Training: Achieving 4× FLOP Reduction for Transformer Pretraining
Pihe Hu, Shaolong Li, Longbo Huang
21 Aug 2024

Mask in the Mirror: Implicit Sparsification
Tom Jacobs, R. Burkholz
19 Aug 2024

Embracing Unknown Step by Step: Towards Reliable Sparse Training in Real World
Bowen Lei, Dongkuan Xu, Ruqi Zhang, Bani Mallick
UQCV · 29 Mar 2024
SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization
T. Yasuda, Kyriakos Axiotis, Gang Fu, M. Bateni, Vahab Mirrokni
27 Feb 2024

Hierarchical Continual Reinforcement Learning via Large Language Model
Chaofan Pan, Xin Yang, Hao Wang, Wei Wei, Tianrui Li
25 Jan 2024

Always-Sparse Training by Growing Connections with Guided Stochastic Exploration
Mike Heddes, Narayan Srinivasa, T. Givargis, Alexandru Nicolau
12 Jan 2024

Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters
James Seale Smith, Yen-Chang Hsu, Z. Kira, Yilin Shen, Hongxia Jin
DiffM · 30 Nov 2023

Towards Higher Ranks via Adversarial Weight Pruning
Yuchuan Tian, Hanting Chen, Tianyu Guo, Chao Xu, Yunhe Wang
29 Nov 2023

Towards guarantees for parameter isolation in continual learning
Giulia Lanzillotta, Sidak Pal Singh, Benjamin Grewe, Thomas Hofmann
02 Oct 2023

Scaling Laws for Sparsely-Connected Foundation Models
Elias Frantar, C. Riquelme, N. Houlsby, Dan Alistarh, Utku Evci
15 Sep 2023
The Quest of Finding the Antidote to Sparse Double Descent
Victor Quétu, Marta Milovanović
31 Aug 2023

HyperSparse Neural Networks: Shifting Exploration to Exploitation through Adaptive Regularization
Patrick Glandorf, Timo Kaiser, Bodo Rosenhahn
14 Aug 2023

Accurate Neural Network Pruning Requires Rethinking Sparse Optimization
Denis Kuznedelev, Eldar Kurtic, Eugenia Iofinova, Elias Frantar, Alexandra Peste, Dan Alistarh
VLM · 03 Aug 2023

Weight Compander: A Simple Weight Reparameterization for Regularization
Rinor Cakaj, Jens Mehnert, B. Yang
29 Jun 2023

The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter
Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Zhangyang Wang
VLM · 06 Jun 2023

Dynamic Sparsity Is Channel-Level Sparsity Learner
Lu Yin, Gen Li, Meng Fang, Lijuan Shen, Tianjin Huang, Zhangyang Wang, Vlado Menkovski, Xiaolong Ma, Mykola Pechenizkiy, Shiwei Liu
30 May 2023

Synaptic Weight Distributions Depend on the Geometry of Plasticity
Roman Pogodin, Jonathan H. Cornford, Arna Ghosh, Gauthier Gidel, Guillaume Lajoie, Blake A. Richards
30 May 2023
AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks
Abhisek Kundu, Naveen Mellempudi, Dharma Teja Vooturi, Bharat Kaul, Pradeep Dubey
14 Apr 2023

Factorizers for Distributed Sparse Block Codes
Michael Hersche, Aleksandar Terzić, G. Karunaratne, Jovin Langenegger, Angeline Pouget, G. Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi
24 Mar 2023

Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!
Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang
03 Mar 2023

A Unified Framework for Soft Threshold Pruning
Yanqing Chen, Zhengyu Ma, Wei Fang, Xiawu Zheng, Zhaofei Yu, Yonghong Tian
25 Feb 2023

Calibrating the Rigged Lottery: Making All Tickets Reliable
Bowen Lei, Ruqi Zhang, Dongkuan Xu, Bani Mallick
UQCV · 18 Feb 2023

SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks
Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh
09 Feb 2023

Ten Lessons We Have Learned in the New "Sparseland": A Short Handbook for Sparse Neural Network Researchers
Shiwei Liu, Zhangyang Wang
06 Feb 2023
Implicit Regularization for Group Sparsity
Jiangyuan Li, Thanh Van Nguyen, C. Hegde, Raymond K. W. Wong
29 Jan 2023

Modality-Agnostic Variational Compression of Implicit Neural Representations
Jonathan Richard Schwarz, Jihoon Tack, Yee Whye Teh, Jaeho Lee, Jinwoo Shin
23 Jan 2023

Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction
Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani Mallick
09 Jan 2023

Lifelong Reinforcement Learning with Modulating Masks
Eseoghene Ben-Iwhiwhu, Saptarshi Nath, Praveen K. Pilly, Soheil Kolouri, Andrea Soltoggio
CLL · OffRL · 21 Dec 2022

Dynamic Sparse Network for Time Series Classification: Learning What to "see"
Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, Decebal Constantin Mocanu
AI4TS · 19 Dec 2022

Building a Subspace of Policies for Scalable Continual Learning
Jean-Baptiste Gaya, T. Doan, Lucas Caccia, Laure Soulier, Ludovic Denoyer, Roberta Raileanu
CLL · 18 Nov 2022

Efficient Multi-Prize Lottery Tickets: Enhanced Accuracy, Training, and Inference Speed
Hao-Ran Cheng, Pu Zhao, Yize Li, Xue Lin, James Diffenderfer, R. Goldhahn, B. Kailkhura
MQ · 26 Sep 2022
Hebbian Continual Representation Learning
P. Morawiecki, Andrii Krutsylo, Maciej Wołczyk, Marek Śmieja
CLL · 28 Jun 2022

RLx2: Training a Sparse Deep Reinforcement Learning Model from Scratch
Y. Tan, Pihe Hu, L. Pan, Jiatai Huang, Longbo Huang
OffRL · 30 May 2022

Spartan: Differentiable Sparsity via Regularized Transportation
Kai Sheng Tai, Taipeng Tian, Ser-Nam Lim
27 May 2022

How catastrophic can catastrophic forgetting be in linear regression?
Itay Evron, E. Moroshko, Rachel A. Ward, Nati Srebro, Daniel Soudry
CLL · 19 May 2022

Meta-Learning Sparse Compression Networks
Jonathan Richard Schwarz, Yee Whye Teh
18 May 2022

Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey
Paul Wimmer, Jens Mehnert, A. P. Condurache
DD · 17 May 2022

Sparsity and Heterogeneous Dropout for Continual Learning in the Null Space of Neural Activations
Ali Abbasi, Parsa Nooralinejad, Vladimir Braverman, Hamed Pirsiavash, Soheil Kolouri
CLL · 12 Mar 2022

SPDY: Accurate Pruning with Speedup Guarantees
Elias Frantar, Dan Alistarh
31 Jan 2022

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
UQCV · BDL · 05 Dec 2016