Network Pruning That Matters: A Case Study on Retraining Variants
arXiv:2105.03193
Duong H. Le, Binh-Son Hua
7 May 2021

Papers citing "Network Pruning That Matters: A Case Study on Retraining Variants"

11 papers shown:
Straightforward Layer-wise Pruning for More Efficient Visual Adaptation
Ruizi Han, Jinglei Tang
19 Jul 2024

Iterative Filter Pruning for Concatenation-based CNN Architectures
Svetlana Pavlitska, Oliver Bagge, Federico Nicolás Peccia, Toghrul Mammadov, J. Marius Zöllner
VLM, 3DPC
04 May 2024

PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs
Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta
VLM
23 Dec 2023

Unlearning with Fisher Masking
Yufang Liu, Changzhi Sun, Yuanbin Wu, Aimin Zhou
MU
09 Oct 2023

Magnitude Attention-based Dynamic Pruning
Jihye Back, Namhyuk Ahn, Jang-Hyun Kim
08 Jun 2023

Network Pruning Spaces
Xuanyu He, Yu-I Yang, Ran Song, Jiachen Pu, Conggang Hu, Feijun Jiang, Wei Zhang, Huanghao Ding
3DPC
19 Apr 2023

Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning
Huan Wang, Can Qin, Yue Bai, Yun Fu
12 Jan 2023

Sparse Double Descent: Where Network Pruning Aggravates Overfitting
Zhengqi He, Zeke Xie, Quanzhi Zhu, Zengchang Qin
17 Jun 2022

Compression-aware Training of Neural Networks using Frank-Wolfe
Max Zimmer, Christoph Spiegel, Sebastian Pokutta
24 May 2022

1xN Pattern for Pruning Convolutional Neural Networks
Mingbao Lin, Yu-xin Zhang, Yuchao Li, Bohong Chen, Mengdi Wang, Shen Li, Yonghong Tian, Rongrong Ji
3DPC
31 May 2021

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
05 Mar 2020