SNIP: Single-shot Network Pruning based on Connection Sensitivity
4 October 2018
Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr
arXiv: 1810.02340
Tags: VLM
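For context on the criterion named in the title: SNIP scores each connection at initialization by how sensitive the loss is to removing it, which for a weight w with gradient g on a single mini-batch reduces to a saliency proportional to |g * w|; the top-scoring connections are kept and the rest are pruned before training begins. The snippet below is a minimal PyTorch-style sketch of that scoring under these assumptions, not the authors' reference implementation; the model, loss function, mini-batch, and keep_ratio are placeholders.

```python
# Minimal sketch (not the authors' reference code): connection-sensitivity
# scores in the spirit of SNIP, computed from a single mini-batch at init.
import torch

def snip_scores(model, loss_fn, inputs, targets):
    """Return per-parameter saliency |g * w| from one forward/backward pass."""
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    scores = {}
    for name, p in model.named_parameters():
        if p.grad is not None:
            # d(loss)/d(mask) evaluated at mask == 1 equals grad * weight
            scores[name] = (p.grad * p.detach()).abs()
    return scores

def snip_masks(scores, keep_ratio=0.1):
    """Keep the globally top `keep_ratio` fraction of connections; mask the rest."""
    flat = torch.cat([s.flatten() for s in scores.values()])
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = torch.topk(flat, k, largest=True).values.min()
    return {name: (s >= threshold).float() for name, s in scores.items()}
```

In the single-shot setting the resulting binary masks are applied multiplicatively to the weights and held fixed while the pruned network is trained once.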

Papers citing "SNIP: Single-shot Network Pruning based on Connection Sensitivity"

50 / 708 papers shown
Title | Authors | Tags | Date
Efficient Unstructured Pruning of Mamba State-Space Models for Resource-Constrained Environments | Ibne Farabi Shihab, Sanjeda Akter, Anuj Sharma | Mamba | 13 May 2025
L-SWAG: Layer-Sample Wise Activation with Gradients information for Zero-Shot NAS on Vision Transformers | S. Casarin, Sergio Escalera, Oswald Lanz | - | 12 May 2025
Efficient Shapley Value-based Non-Uniform Pruning of Large Language Models | Chuan Sun, Han Yu, Lizhen Cui, Xiaoxiao Li | - | 03 May 2025
Sparse-to-Sparse Training of Diffusion Models | Inês Cardoso Oliveira, Decebal Constantin Mocanu, Luis A. Leiva | DiffM | 30 Apr 2025
Efficient Adaptation of Deep Neural Networks for Semantic Segmentation in Space Applications | Leonardo Olivi, Edoardo Santero Mormile, Enzo Tartaglione | SSeg | 22 Apr 2025
Sign-In to the Lottery: Reparameterizing Sparse Training From Scratch | Advait Gadhikar, Tom Jacobs, Chao Zhou, R. Burkholz | - | 17 Apr 2025
You Don't Need All Attentions: Distributed Dynamic Fine-Tuning for Foundation Models | Shiwei Ding, Lan Zhang, Zhenlin Wang, Giuseppe Ateniese, Xiaoyong Yuan | - | 16 Apr 2025
CUT: Pruning Pre-Trained Multi-Task Models into Compact Models for Edge Devices | Jingxuan Zhou, Weidong Bao, Ji Wang, Zhengyi Zhong | - | 14 Apr 2025
Hyperflows: Pruning Reveals the Importance of Weights | Eugen Barbulescu, Antonio Alexoaie | - | 06 Apr 2025
Sensitivity Meets Sparsity: The Impact of Extremely Sparse Parameter Patterns on Theory-of-Mind of Large Language Models | Yuheng Wu, Wentao Guo, Zirui Liu, Heng Ji, Zhaozhuo Xu, Denghui Zhang | - | 05 Apr 2025
The Effects of Grouped Structural Global Pruning of Vision Transformers on Domain Generalisation | Hamza Riaz, Alan F. Smeaton | ViT | 05 Apr 2025
FedPaI: Achieving Extreme Sparsity in Federated Learning via Pruning at Initialization | Haonan Wang, Ziqiang Liu, Kajimusugura Hoshino, Tuo Zhang, J. Walters, S. Crago | - | 01 Apr 2025
Efficient Continual Learning through Frequency Decomposition and Integration | Ruiqi Liu, Boyu Diao, Libo Huang, Hangda Liu, Chuanguang Yang, Zhulin An, Yongjun Xu | CLL | 28 Mar 2025
An Efficient Training Algorithm for Models with Block-wise Sparsity | Ding Zhu, Zhiqun Zuo, Mohammad Mahdi Khalili | - | 27 Mar 2025
As easy as PIE: understanding when pruning causes language models to disagree | Pietro Tropeano, Maria Maistro, Tuukka Ruotsalo, Christina Lioma | - | 27 Mar 2025
RBFleX-NAS: Training-Free Neural Architecture Search Using Radial Basis Function Kernel and Hyperparameter Detection | Tomomasa Yamasaki, Zhehui Wang, Tao Luo, Niangjun Chen, Bo Wang | - | 26 Mar 2025
ZeroLM: Data-Free Transformer Architecture Search for Language Models | Zhen-Song Chen, Hong-Wei Ding, Xian-Jia Wang, Witold Pedrycz | - | 24 Mar 2025
Finding Stable Subnetworks at Initialization with Dataset Distillation | Luke McDermott, Rahul Parhi | DD | 23 Mar 2025
Safe Vision-Language Models via Unsafe Weights Manipulation | Moreno D'Incà, E. Peruzzo, Xingqian Xu, Humphrey Shi, N. Sebe, Massimiliano Mancini | MU | 14 Mar 2025
Poly-MgNet: Polynomial Building Blocks in Multigrid-Inspired ResNets | Antonia van Betteray, Matthias Rottmann, Karsten Kahl | - | 13 Mar 2025
ResMoE: Space-efficient Compression of Mixture of Experts LLMs via Residual Restoration | Mengting Ai, Tianxin Wei, Yifan Chen, Zhichen Zeng, Ritchie Zhao, G. Varatkar, B. Rouhani, Xianfeng Tang, Hanghang Tong, Jingrui He | MoE | 10 Mar 2025
Learning to Unlearn while Retaining: Combating Gradient Conflicts in Machine Unlearning | Gaurav Patel, Qiang Qiu | MU | 08 Mar 2025
Keeping Yourself is Important in Downstream Tuning Multimodal Large Language Model | Wenke Huang, Jian Liang, Xianda Guo, Yiyang Fang, Guancheng Wan, ..., Bin Yang, He Li, Jiawei Shao, Mang Ye, Bo Du | OffRL, LRM, MLLM, KELM, VLM | 06 Mar 2025
FoCTTA: Low-Memory Continual Test-Time Adaptation with Focus | Youbing Hu, Yun Cheng, Zimu Zhou, Anqi Lu, Zhiqiang Cao, Zhijun Li | TTA | 28 Feb 2025
LED-Merging: Mitigating Safety-Utility Conflicts in Model Merging with Location-Election-Disjoint | Qianli Ma, Dongrui Liu, Qian Chen, Linfeng Zhang, Jing Shao | MoMe | 24 Feb 2025
PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning | Pengcheng Huang, Zhenghao Liu, Yukun Yan, Xiaoyuan Yi, Hao Chen, Zhiyuan Liu, Maosong Sun, Tong Xiao, Ge Yu, Chenyan Xiong | - | 24 Feb 2025
PPC-GPT: Federated Task-Specific Compression of Large Language Models via Pruning and Chain-of-Thought Distillation | Tao Fan, Guoqiang Ma, Yuanfeng Song, Lixin Fan, Kai Chen, Qiang Yang | - | 21 Feb 2025
NEAR: A Training-Free Pre-Estimator of Machine Learning Model Performance | Raphael T. Husistein, Markus Reiher, Marco Eckhoff | - | 20 Feb 2025
E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation | Boqian Wu, Q. Xiao, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Decebal Constantin Mocanu, M. V. Keulen, Elena Mocanu | MedIm | 20 Feb 2025
Signal Collapse in One-Shot Pruning: When Sparse Models Fail to Distinguish Neural Representations | Dhananjay Saikumar, Blesson Varghese | - | 18 Feb 2025
DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs | Minxuan Lv, Zhenpeng Su, Leiyu Pan, Yizhe Xiong, Zijia Lin, ..., Guiguang Ding, Cheng Luo, Di Zhang, Kun Gai, Songlin Hu | MoE | 18 Feb 2025
Keep what you need : extracting efficient subnetworks from large audio representation models | David Genova, P. Esling, Tom Hurlin | - | 18 Feb 2025
Fishing For Cheap And Efficient Pruners At Initialization | Ivo Gollini Navarrete, Nicolas Mauricio Cuadrado, Jose Renato Restom, Martin Takáč, Samuel Horvath | - | 17 Feb 2025
HADL Framework for Noise Resilient Long-Term Time Series Forecasting | Aditya Dey, Jonas Kusch, Fadi Al Machot | AI4TS | 14 Feb 2025
Training-Free Restoration of Pruned Neural Networks | Keonho Lee, Minsoo Kim, Dong-Wan Choi | - | 06 Feb 2025
Advancing Weight and Channel Sparsification with Enhanced Saliency | Xinglong Sun, Maying Shen, Hongxu Yin, Lei Mao, Pavlo Molchanov, Jose M. Alvarez | - | 05 Feb 2025
Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries | Chris Kolb, T. Weber, Bernd Bischl, David Rügamer | - | 04 Feb 2025
Brain-inspired sparse training enables Transformers and LLMs to perform as fully connected | Yingtao Zhang, Jialin Zhao, Wenjing Wu, Ziheng Liao, Umberto Michieli, C. Cannistraci | - | 31 Jan 2025
Symmetric Pruning of Large Language Models | Kai Yi, Peter Richtárik | AAML, VLM | 31 Jan 2025
Information Consistent Pruning: How to Efficiently Search for Sparse Networks? | Soheil Gharatappeh, Salimeh Yasaei Sekeh | - | 28 Jan 2025
Implicit Bias in Matrix Factorization and its Explicit Realization in a New Architecture | Yikun Hou, Suvrit Sra, A. Yurtsever | - | 28 Jan 2025
Sparse High Rank Adapters | K. Bhardwaj, N. Pandey, Sweta Priyadarshi, Viswanath Ganapathy, Rafael Esteves, ..., P. Whatmough, Risheek Garrepalli, M. V. Baalen, Harris Teague, Markus Nagel | MQ | 28 Jan 2025
Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic | Yifei He, Yuzheng Hu, Yong Lin, Tong Zhang, Han Zhao | FedML, MoMe | 08 Jan 2025
Vision Transformer Neural Architecture Search for Out-of-Distribution Generalization: Benchmark and Insights | Sy-Tuyen Ho, Tuan Van Vo, Somayeh Ebrahimkhani, Ngai-man Cheung | - | 08 Jan 2025
Pruning-based Data Selection and Network Fusion for Efficient Deep Learning | Humaira Kousar, Hasnain Irshad Bhatti, Jaekyun Moon | - | 03 Jan 2025
NLSR: Neuron-Level Safety Realignment of Large Language Models Against Harmful Fine-Tuning | Xin Yi, Shunfan Zheng, Linlin Wang, Gerard de Melo, Xiaoling Wang, Liang He | - | 17 Dec 2024
No More Adam: Learning Rate Scaling at Initialization is All You Need | Minghao Xu, Lichuan Xiang, Xu Cai, Hongkai Wen | - | 16 Dec 2024
Fast Track to Winning Tickets: Repowering One-Shot Pruning for Graph Neural Networks | Xinfeng Li, Guibin Zhang, Haoran Yang, Dawei Cheng | - | 10 Dec 2024
GradAlign for Training-free Model Performance Inference | Yuxuan Li, Yunhui Guo | - | 29 Nov 2024
Is Oracle Pruning the True Oracle? | Sicheng Feng, Keda Tao, Haoyu Wang | VLM | 28 Nov 2024