Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, J. Tran, W. Dally · CVBM · 8 June 2015
arXiv:1506.02626

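The cited paper's core recipe is a three-step pipeline: train a dense network, prune connections whose weight magnitudes fall below a threshold, and retrain the surviving sparse network. As a rough illustration for readers of this citation list, the sketch below shows one-shot magnitude pruning followed by mask-preserving retraining in PyTorch. The model, sparsity level, and helper names are assumptions made for the example, not the authors' released code.

```python
# Illustrative sketch of magnitude-based "train, prune, retrain" pruning.
# Hypothetical helpers and sparsity level; not the paper's original implementation.
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float) -> dict:
    """Zero out the smallest-magnitude weights of each >=2-D parameter and
    return {name: mask} so pruned connections stay removed during retraining."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:                      # skip biases and other 1-D params
            continue
        k = int(sparsity * param.numel())
        if k < 1:
            continue
        flat = param.detach().abs().flatten()
        threshold = flat.kthvalue(k).values      # k-th smallest magnitude
        mask = (param.detach().abs() > threshold).float()
        param.data.mul_(mask)                    # prune: drop low-magnitude connections
        masks[name] = mask
    return masks

def retrain_step(model, masks, loss_fn, optimizer, x, y):
    """One retraining step on the surviving connections; pruned weights are
    re-zeroed after the optimizer update so the sparsity pattern is preserved."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])
    return loss.item()
```

A typical use would be to train the dense model first, call magnitude_prune(model, 0.9) once (or in several rounds of increasing sparsity, as the paper does iteratively), and then run retrain_step inside an ordinary training loop.
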
Papers citing "Learning both Weights and Connections for Efficient Neural Networks"

50 / 1,144 papers shown
Improved Methods for Model Pruning and Knowledge Distillation
Wei Jiang, Anying Fu, Youling Zhang · VLM · 20 May 2025

Optimal Client Sampling in Federated Learning with Client-Level Heterogeneous Differential Privacy
Jiahao Xu, Rui Hu, Olivera Kotevska · FedML · 19 May 2025

BINGO: A Novel Pruning Mechanism to Reduce the Size of Neural Networks
Aditya Panangat · 15 May 2025

ILIF: Temporal Inhibitory Leaky Integrate-and-Fire Neuron for Overactivation in Spiking Neural Networks
Kai Sun, Peibo Duan, Levin Kuhlmann, Beilun Wang, Bin Zhang · 15 May 2025

PDE: Gene Effect Inspired Parameter Dynamic Evolution for Low-light Image Enhancement
Tong Li, Lizhi Wang, Hansen Feng, Lin Zhu, Hua Huang · DiffM · 14 May 2025

Differentiable Channel Selection in Self-Attention For Person Re-Identification
Yancheng Wang, Nebojsa Jojic, Yingzhen Yang · 13 May 2025

Efficient Unstructured Pruning of Mamba State-Space Models for Resource-Constrained Environments
Ibne Farabi Shihab, Sanjeda Akter, Anuj Sharma · Mamba · 13 May 2025

Sparse Training from Random Initialization: Aligning Lottery Ticket Masks using Weight Symmetry
Mohammed Adnan, Rohan Jain, Ekansh Sharma, Rahul Krishnan, Yani Andrew Ioannou · 08 May 2025

Guiding Evolutionary AutoEncoder Training with Activation-Based Pruning Operators
Steven Jorgensen, Erik Hemberg, J. Toutouh, Una-May O’Reilly · 08 May 2025

EntroLLM: Entropy Encoded Weight Compression for Efficient Large Language Model Inference on Edge Devices
Arnab Sanyal, Prithwish Mukherjee, Gourav Datta, Sandeep P. Chinchali · MQ · 05 May 2025

Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques
Sanjay Surendranath Girija, Shashank Kapoor, Lakshit Arora, Dipen Pradhan, Aman Raj, Ankit Shetgaonkar · 05 May 2025

ReplaceMe: Network Simplification via Layer Pruning and Linear Transformations
Dmitriy Shopkhoev, Ammar Ali, Magauiya Zhussip, Valentin Malykh, Stamatios Lefkimmiatis, N. Komodakis, Sergey Zagoruyko · VLM · 05 May 2025

Efficient Shapley Value-based Non-Uniform Pruning of Large Language Models
Chuan Sun, Han Yu, Lizhen Cui, Xiaoxiao Li · 03 May 2025

Model Connectomes: A Generational Approach to Data-Efficient Language Models
Klemen Kotar, Greta Tuckute · 29 Apr 2025

3DPyranet Features Fusion for Spatio-temporal Feature Learning
I. Ullah, A. Petrosino · 3DPC · 26 Apr 2025

Event-Based Eye Tracking. 2025 Event-based Vision Workshop
Qinyu Chen, Chang Gao, Min Liu, Daniele Perrone, Yan Ru Pei, ..., Hoang M. Truong, Vinh-Thuan Ly, Huy G. Tran, Thuan-Phat Nguyen, Tram T. Doan · 25 Apr 2025

The Rise of Small Language Models in Healthcare: A Comprehensive Survey
Muskan Garg, Shaina Raza, Shebuti Rayana, Xingyi Liu, Sunghwan Sohn · LM&MA, AILaw · 23 Apr 2025

BackSlash: Rate Constrained Optimized Training of Large Language Models
Jun Wu, Jiangtao Wen, Yuxing Han · 23 Apr 2025

Two is Better than One: Efficient Ensemble Defense for Robust and Compact Models
Yoojin Jung, Byung Cheol Song · AAML, VLM, MQ · 07 Apr 2025

Towards Understanding and Improving Refusal in Compressed Models via Mechanistic Interpretability
Vishnu Kabir Chhabra, Mohammad Mahdi Khalili · AI4CE · 05 Apr 2025

RANa: Retrieval-Augmented Navigation
G. Monaci, Rafael Sampaio de Rezende, Romain Deffayet, G. Csurka, G. Bono, Hervé Déjean, Stéphane Clinchant, Christian Wolf · 04 Apr 2025

Online Difficulty Filtering for Reasoning Oriented Reinforcement Learning
Sanghwan Bae, Jiwoo Hong, Min Young Lee, Hanbyul Kim, Jeongyeon Nam, Donghyun Kwak · OffRL, LRM · 04 Apr 2025

Neuroplasticity in Artificial Intelligence -- An Overview and Inspirations on Drop In & Out Learning
Yupei Li, M. Milling, Björn Schuller · AI4CE · 27 Mar 2025

Are formal and functional linguistic mechanisms dissociated in language models?
Michael Hanna, Sandro Pezzelle, Yonatan Belinkov · 14 Mar 2025

Poly-MgNet: Polynomial Building Blocks in Multigrid-Inspired ResNets
Antonia van Betteray, Matthias Rottmann, Karsten Kahl · 13 Mar 2025

Wanda++: Pruning Large Language Models via Regional Gradients
Yifan Yang, Kai Zhen, Bhavana Ganesh, Aram Galstyan, Goeric Huybrechts, ..., S. Bodapati, Nathan Susanj, Zheng Zhang, Jack FitzGerald, Abhishek Kumar · 06 Mar 2025

MoSE: Hierarchical Self-Distillation Enhances Early Layer Embeddings
Andrea Gurioli, Federico Pennino, João Monteiro, Maurizio Gabbrielli · 04 Mar 2025

FoCTTA: Low-Memory Continual Test-Time Adaptation with Focus
Youbing Hu, Yun Cheng, Zimu Zhou, Anqi Lu, Zhiqiang Cao, Zhijun Li · TTA · 28 Feb 2025

Probe Pruning: Accelerating LLMs through Dynamic Pruning via Model-Probing
Qi Le, Enmao Diao, Ziyan Wang, Xinran Wang, Jie Ding, Li Yang, Ali Anwar · 24 Feb 2025

CLIF: Complementary Leaky Integrate-and-Fire Neuron for Spiking Neural Networks
Yulong Huang, Xiaopeng Lin, Hongwei Ren, Haotian Fu, Yue Zhou, Zunchang Liu, Biao Pan, Bojun Cheng · 20 Feb 2025

GPU Memory Usage Optimization for Backward Propagation in Deep Network Training
Ding-Yong Hong, Tzu-Hsien Tsai, Ning Wang, Pangfeng Liu, Jan-Jan Wu · 18 Feb 2025

An Efficient Row-Based Sparse Fine-Tuning
Cen-Jhih Li, Aditya Bhaskara · 17 Feb 2025

Forget the Data and Fine-Tuning! Just Fold the Network to Compress
Dong Wang, Haris Šikić, Lothar Thiele, O. Saukh · 17 Feb 2025

Low-Rank Compression for IMC Arrays
Kang Eun Jeon, Johnny Rhe, J. Ko · 10 Feb 2025

Identify Critical KV Cache in LLM Inference from an Output Perturbation Perspective
Yuan Feng, Junlin Lv, Yuhang Cao, Xike Xie, S.Kevin Zhou · 06 Feb 2025

Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries
Chris Kolb, T. Weber, Bernd Bischl, David Rügamer · 04 Feb 2025

Brain-inspired sparse training enables Transformers and LLMs to perform as fully connected
Yingtao Zhang, Jialin Zhao, Wenjing Wu, Ziheng Liao, Umberto Michieli, C. Cannistraci · 31 Jan 2025

Symmetric Pruning of Large Language Models
Kai Yi, Peter Richtárik · AAML, VLM · 31 Jan 2025

SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs
Mohammad Mozaffari, Amir Yazdanbakhsh, Zhao Zhang, M. Dehnavi · 28 Jan 2025

Information Consistent Pruning: How to Efficiently Search for Sparse Networks?
Soheil Gharatappeh, Salimeh Yasaei Sekeh · 28 Jan 2025

Implicit Bias in Matrix Factorization and its Explicit Realization in a New Architecture
Yikun Hou, Suvrit Sra, A. Yurtsever · 28 Jan 2025

FIT-Print: Towards False-claim-resistant Model Ownership Verification via Targeted Fingerprint
Shuo Shao, Haozhe Zhu, Hongwei Yao, Yiming Li, Tianwei Zhang, Zhan Qin, Kui Ren · 28 Jan 2025

Meta-Sparsity: Learning Optimal Sparse Structures in Multi-task Networks through Meta-learning
Richa Upadhyay, Ronald Phlypo, Rajkumar Saini, Marcus Liwicki · 21 Jan 2025

Playing the Lottery With Concave Regularizers for Sparse Trainable Neural Networks
Giulia Fracastoro, Sophie M. Fosson, Andrea Migliorati, G. Calafiore · 19 Jan 2025

MOGNET: A Mux-residual quantized Network leveraging Online-Generated weights
Van Thien Nguyen, William Guicquero, Gilles Sicard · MQ · 17 Jan 2025

Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic
Yifei He, Yuzheng Hu, Yong Lin, Tong Zhang, Han Zhao · FedML, MoMe · 08 Jan 2025

PTEENet: Post-Trained Early-Exit Neural Networks Augmentation for Inference Cost Optimization
Assaf Lahiany, Yehudit Aperstein · 07 Jan 2025

GQSA: Group Quantization and Sparsity for Accelerating Large Language Model Inference
Chao Zeng, Songwei Liu, Shu Yang, Fangmin Chen, Xing Mei, Lean Fu · MQ · 23 Dec 2024

Training MLPs on Graphs without Supervision
Zehong Wang, Zheyuan Zhang, Chuxu Zhang, Yanfang Ye · 05 Dec 2024

Pushing the Limits of Sparsity: A Bag of Tricks for Extreme Pruning
Andy Li, A. Durrant, Milan Markovic, Lu Yin, Georgios Leontidis, Tianlong Chen · 20 Nov 2024