ResearchTrend.AI
Rigging the Lottery: Making All Tickets Winners (arXiv:1911.11134)

25 November 2019
Utku Evci
Trevor Gale
Jacob Menick
Pablo Samuel Castro
Erich Elsen

Papers citing "Rigging the Lottery: Making All Tickets Winners"

Showing 50 of 147 citing papers
Are Spatial-Temporal Graph Convolution Networks for Human Action Recognition Over-Parameterized?
Jianyang Xie
Yitian Zhao
Y. Meng
He Zhao
Anh Nguyen
Yalin Zheng
22
0
0
15 May 2025
Efficient Shapley Value-based Non-Uniform Pruning of Large Language Models
Chuan Sun
Han Yu
Lizhen Cui
Xiaoxiao Li
178
0
0
03 May 2025
Sparse-to-Sparse Training of Diffusion Models
Inês Cardoso Oliveira
Decebal Constantin Mocanu
Luis A. Leiva
DiffM
86
0
0
30 Apr 2025
Periodic Online Testing for Sparse Systolic Tensor Arrays
C. Peltekis
Chrysostomos Nicopoulos
G. Dimitrakopoulos
52
0
0
25 Apr 2025
Sculpting Memory: Multi-Concept Forgetting in Diffusion Models via Dynamic Mask and Concept-Aware Optimization
Gen Li
Yang Xiao
Jie Ji
Kaiyuan Deng
Bo Hui
Linke Guo
Xiaolong Ma
29
0
0
12 Apr 2025
E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation
Boqian Wu
Q. Xiao
Shiwei Liu
Lu Yin
Mykola Pechenizkiy
Decebal Constantin Mocanu
M. V. Keulen
Elena Mocanu
MedIm
65
4
0
20 Feb 2025
Advancing Weight and Channel Sparsification with Enhanced Saliency
Xinglong Sun
Maying Shen
Hongxu Yin
Lei Mao
Pavlo Molchanov
Jose M. Alvarez
58
1
0
05 Feb 2025
Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries
Chris Kolb
T. Weber
Bernd Bischl
David Rügamer
115
0
0
04 Feb 2025
Brain-inspired sparse training enables Transformers and LLMs to perform as fully connected
Yingtao Zhang
Jialin Zhao
Wenjing Wu
Ziheng Liao
Umberto Michieli
C. Cannistraci
58
0
0
31 Jan 2025
Symmetric Pruning of Large Language Models
Kai Yi
Peter Richtárik
AAML
VLM
73
0
0
31 Jan 2025
Information Consistent Pruning: How to Efficiently Search for Sparse Networks?
Soheil Gharatappeh
Salimeh Yasaei Sekeh
59
0
0
28 Jan 2025
Meta-Sparsity: Learning Optimal Sparse Structures in Multi-task Networks through Meta-learning
Richa Upadhyay
Ronald Phlypo
Rajkumar Saini
Marcus Liwicki
42
0
0
21 Jan 2025
DynST: Dynamic Sparse Training for Resource-Constrained Spatio-Temporal Forecasting
Hao Wu
Haomin Wen
Guibin Zhang
Yutong Xia
Kai Wang
Keli Zhang
Yu Zheng
Kun Wang
73
2
0
17 Jan 2025
SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training
Tianjin Huang
Ziquan Zhu
Gaojie Jin
Lu Liu
Zhangyang Wang
Shiwei Liu
50
1
0
12 Jan 2025
Pushing the Limits of Sparsity: A Bag of Tricks for Extreme Pruning
Andy Li
A. Durrant
Milan Markovic
Lu Yin
Georgios Leontidis
Tianlong Chen
80
0
0
20 Nov 2024
Zeroth-Order Adaptive Neuron Alignment Based Pruning without Re-Training
Elia Cunegatti
Leonardo Lucio Custode
Giovanni Iacca
52
0
0
11 Nov 2024
Layer-Adaptive State Pruning for Deep State Space Models
Minseon Gwak
Seongrok Moon
Joohwan Ko
PooGyeon Park
30
0
0
05 Nov 2024
Navigating Extremes: Dynamic Sparsity in Large Output Spaces
Nasib Ullah
Erik Schultheis
Mike Lasby
Yani Andrew Ioannou
Rohit Babbar
37
0
0
05 Nov 2024
Targeted Vaccine: Safety Alignment for Large Language Models against Harmful Fine-Tuning via Layer-wise Perturbation
Guozhi Liu
Weiwei Lin
Tiansheng Huang
Ruichao Mo
Qi Mu
Li Shen
AAML
69
11
0
13 Oct 2024
Neuroplastic Expansion in Deep Reinforcement Learning
Jiashun Liu
J. Obando-Ceron
Rameswar Panda
L. Pan
44
3
0
10 Oct 2024
Learning Emergence of Interaction Patterns across Independent RL Agents in Multi-Agent Environments
Vasanth Reddy Baddam
Suat Gumussoy
Almuatazbellah Boker
Hoda Eldardiry
OffRL
AI4CE
26
0
0
03 Oct 2024
EntryPrune: Neural Network Feature Selection using First Impressions
Felix Zimmer
Patrik Okanovic
Torsten Hoefler
29
0
0
03 Oct 2024
Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness
Boqian Wu
Q. Xiao
Shunxin Wang
N. Strisciuglio
Mykola Pechenizkiy
M. V. Keulen
Decebal Constantin Mocanu
Elena Mocanu
OOD
3DH
60
0
0
03 Oct 2024
OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition
Stephen Zhang
Vardan Papyan
VLM
51
1
0
20 Sep 2024
S-STE: Continuous Pruning Function for Efficient 2:4 Sparse Pre-training
Yuezhou Hu
Jun-Jie Zhu
Jianfei Chen
53
0
0
13 Sep 2024
Mask in the Mirror: Implicit Sparsification
Tom Jacobs
R. Burkholz
49
3
0
19 Aug 2024
LPViT: Low-Power Semi-structured Pruning for Vision Transformers
Kaixin Xu
Zhe Wang
Chunyun Chen
Xue Geng
Jie Lin
Xulei Yang
Min-man Wu
Min Wu
Xiaoli Li
Weisi Lin
ViT
VLM
51
7
0
02 Jul 2024
Boosting Robustness in Preference-Based Reinforcement Learning with Dynamic Sparsity
Calarina Muslimani
Bram Grooten
Deepak Ranganatha Sastry Mamillapalli
Mykola Pechenizkiy
Decebal Constantin Mocanu
Matthew E. Taylor
54
0
0
10 Jun 2024
Effective Interplay between Sparsity and Quantization: From Theory to Practice
Simla Burcu Harma
Ayan Chakraborty
Elizaveta Kostenok
Danila Mishin
Dongho Ha
...
Martin Jaggi
Ming Liu
Yunho Oh
Suvinay Subramanian
Amir Yazdanbakhsh
MQ
46
6
0
31 May 2024
Unmasking Efficiency: Learning Salient Sparse Models in Non-IID Federated Learning
Riyasat Ohib
Bishal Thapaliya
Gintare Karolina Dziugaite
Jingyu Liu
Vince D. Calhoun
Sergey Plis
FedML
32
1
0
15 May 2024
Neural Network Compression for Reinforcement Learning Tasks
Dmitry A. Ivanov
D. Larionov
Oleg V. Maslennikov
V. Voevodin
OffRL
AI4CE
55
0
0
13 May 2024
Fast and Controllable Post-training Sparsity: Learning Optimal Sparsity Allocation with Global Constraint in Minutes
Ruihao Gong
Yang Yong
Zining Wang
Jinyang Guo
Xiuying Wei
Yuqing Ma
Xianglong Liu
54
5
0
09 May 2024
Torch2Chip: An End-to-end Customizable Deep Neural Network Compression and Deployment Toolkit for Prototype Hardware Accelerator Design
Jian Meng
Yuan Liao
Anupreetham Anupreetham
Ahmed Hassan
Shixing Yu
Han-Sok Suh
Xiaofeng Hu
Jae-sun Seo
MQ
51
2
0
02 May 2024
Decentralized Personalized Federated Learning based on a Conditional Sparse-to-Sparser Scheme
Qianyu Long
Qiyuan Wang
Christos Anagnostopoulos
Daning Bi
FedML
28
0
0
24 Apr 2024
Multilingual Brain Surgeon: Large Language Models Can be Compressed Leaving No Language Behind
Hongchuan Zeng
Hongshen Xu
Lu Chen
Kai Yu
59
5
0
06 Apr 2024
Embracing Unknown Step by Step: Towards Reliable Sparse Training in Real World
Bowen Lei
Dongkuan Xu
Ruqi Zhang
Bani Mallick
UQCV
46
0
0
29 Mar 2024
SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization
T. Yasuda
Kyriakos Axiotis
Gang Fu
M. Bateni
Vahab Mirrokni
47
0
0
27 Feb 2024
Always-Sparse Training by Growing Connections with Guided Stochastic Exploration
Mike Heddes
Narayan Srinivasa
T. Givargis
Alexandru Nicolau
91
0
0
12 Jan 2024
PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs
Max Zimmer
Megi Andoni
Christoph Spiegel
Sebastian Pokutta
VLM
55
10
0
23 Dec 2023
MaxQ: Multi-Axis Query for N:M Sparsity Network
Jingyang Xiang
Siqi Li
Junhao Chen
Zhuangzhi Chen
Tianxin Huang
Linpeng Peng
Yong-Jin Liu
18
0
0
12 Dec 2023
One is More: Diverse Perspectives within a Single Network for Efficient DRL
Yiqin Tan
Ling Pan
Longbo Huang
OffRL
40
0
0
21 Oct 2023
Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs
Yuxin Zhang
Lirui Zhao
Mingbao Lin
Yunyun Sun
Yiwu Yao
Xingjia Han
Jared Tanner
Shiwei Liu
Rongrong Ji
SyDa
45
40
0
13 Oct 2023
Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity
Lu Yin
You Wu
Zhenyu Zhang
Cheng-Yu Hsieh
Yaqing Wang
...
Mykola Pechenizkiy
Yi Liang
Michael Bendersky
Zhangyang Wang
Shiwei Liu
36
79
0
08 Oct 2023
FedDIP: Federated Learning with Extreme Dynamic Pruning and Incremental Regularization
Qianyu Long
Christos Anagnostopoulos
S. P. Parambath
Daning Bi
AI4CE
FedML
23
2
0
13 Sep 2023
Uncovering the Hidden Cost of Model Compression
Diganta Misra
Muawiz Chaudhary
Agam Goyal
Bharat Runwal
Pin-Yu Chen
VLM
38
0
0
29 Aug 2023
The Snowflake Hypothesis: Training Deep GNN with One Node One Receptive field
Kun Wang
Guohao Li
Shilong Wang
Guibin Zhang
Kaidi Wang
Yang You
Xiaojiang Peng
Keli Zhang
Yang Wang
42
8
0
19 Aug 2023
No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-based Language Models
Jean Kaddour
Oscar Key
Piotr Nawrot
Pasquale Minervini
Matt J. Kusner
32
41
0
12 Jul 2023
A Simple and Effective Pruning Approach for Large Language Models
Mingjie Sun
Zhuang Liu
Anna Bair
J. Zico Kolter
87
361
0
20 Jun 2023
Spatial Re-parameterization for N:M Sparsity
Yuxin Zhang
Mingbao Lin
Mingliang Xu
Yonghong Tian
Rongrong Ji
46
2
0
09 Jun 2023
Towards Memory-Efficient Training for Extremely Large Output Spaces -- Learning with 500k Labels on a Single Commodity GPU
Erik Schultheis
Rohit Babbar
30
4
0
06 Jun 2023