Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy
arXiv:2103.03014 · 4 March 2021
Lucas Liebenwein, Cenk Baykal, Brandon Carter, David K. Gifford, Daniela Rus
AAML

Papers citing "Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy"

49 papers shown

MicroNAS: An Automated Framework for Developing a Fall Detection System
Seyed Mojtaba Mohasel, John Sheppard, Lindsey K. Molina, Richard R. Neptune, Shane R. Wurdeman, Corey A. Pew
10 Apr 2025

Complexity-Aware Training of Deep Neural Networks for Optimal Structure Discovery
Valentin Frank Ingmar Guenter, Athanasios Sideris
CVBM
14 Nov 2024

Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness
Boqian Wu, Q. Xiao, Shunxin Wang, N. Strisciuglio, Mykola Pechenizkiy, M. V. Keulen, D. Mocanu, Elena Mocanu
OOD, 3DH
03 Oct 2024

Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments
Angie Boggust, Venkatesh Sivaraman, Yannick Assogba, Donghao Ren, Dominik Moritz, Fred Hohman
VLM
06 Aug 2024

On Giant's Shoulders: Effortless Weak to Strong by Dynamic Logits Fusion
Chenghao Fan, Zhenyi Lu, Wei Wei, Jie Tian, Xiaoye Qu, Dangyang Chen, Yu Cheng
MoMe
17 Jun 2024

Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging
Zhenyi Lu, Chenghao Fan, Wei Wei, Xiaoye Qu, Dangyang Chen, Yu Cheng
MoMe
17 Jun 2024

Concurrent Training and Layer Pruning of Deep Neural Networks
Valentin Frank Ingmar Guenter, Athanasios Sideris
3DPC
06 Jun 2024

Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks Using the Marginal Likelihood
Rayen Dhahri, Alexander Immer, Bertrand Charpentier, Stephan Günnemann, Vincent Fortuin
BDL, UQCV
25 Feb 2024

Dependable Distributed Training of Compressed Machine Learning Models
F. Malandrino, G. Giacomo, Marco Levorato, C. Chiasserini
22 Feb 2024

Understanding the Effect of Model Compression on Social Bias in Large Language Models
Gustavo Gonçalves, Emma Strubell
09 Dec 2023

Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks
Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio
AAML
12 Oct 2023

The Cost of Down-Scaling Language Models: Fact Recall Deteriorates before In-Context Learning
Tian Jin, Nolan Clement, Xin Dong, Vaishnavh Nagarajan, Michael Carbin, Jonathan Ragan-Kelley, Gintare Karolina Dziugaite
LRM
07 Oct 2023

Model Compression in Practice: Lessons Learned from Practitioners Creating On-device Machine Learning Experiences
Fred Hohman, Mary Beth Kery, Donghao Ren, Dominik Moritz
06 Oct 2023

Accurate Neural Network Pruning Requires Rethinking Sparse Optimization
Denis Kuznedelev, Eldar Kurtic, Eugenia Iofinova, Elias Frantar, Alexandra Peste, Dan Alistarh
VLM
03 Aug 2023

Sparse then Prune: Toward Efficient Vision Transformers
Yogi Prasetyo, N. Yudistira, A. Widodo
VLM, ViT
22 Jul 2023

Adaptive Sharpness-Aware Pruning for Robust Sparse Networks
Anna Bair, Hongxu Yin, Maying Shen, Pavlo Molchanov, J. Álvarez
25 Jun 2023

CFDP: Common Frequency Domain Pruning
Samir Khaki, Weihan Luo
3DV
07 Jun 2023

Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures
Eugenia Iofinova, Alexandra Peste, Dan Alistarh
25 Apr 2023

Progressive Ensemble Distillation: Building Ensembles for Efficient Inference
D. Dennis, Abhishek Shetty, A. Sevekari, K. Koishida, Virginia Smith
FedML
20 Feb 2023

The Power of External Memory in Increasing Predictive Model Capacity
Cenk Baykal, D. Cutler, Nishanth Dikkala, Nikhil Ghosh, Rina Panigrahy, Xin Wang
KELM
31 Jan 2023

Alternating Updates for Efficient Transformers
Cenk Baykal, D. Cutler, Nishanth Dikkala, Nikhil Ghosh, Rina Panigrahy, Xin Wang
MoE
30 Jan 2023

Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions
Junyang Cai, Khai-Nguyen Nguyen, Nishant Shrestha, Aidan Good, Ruisen Tu, Xin Yu, Shandian Zhe, Thiago Serra
MLT
19 Jan 2023

Bridging Fairness and Environmental Sustainability in Natural Language Processing
Marius Hessenthaler, Emma Strubell, Dirk Hovy, Anne Lauscher
08 Nov 2022

Toward domain generalized pruning by scoring out-of-distribution importance
Rizhao Cai, Haoliang Li, Alex C. Kot
25 Oct 2022

Pruning's Effect on Generalization Through the Lens of Training and Regularization
Tian Jin, Michael Carbin, Daniel M. Roy, Jonathan Frankle, Gintare Karolina Dziugaite
25 Oct 2022

OLLA: Optimizing the Lifetime and Location of Arrays to Reduce the Memory Usage of Neural Networks
Benoit Steiner, Mostafa Elhoushi, Jacob Kahn, James Hegarty
24 Oct 2022

Interpreting Neural Policies with Disentangled Tree Representations
Tsun-Hsuan Wang, Wei Xiao, Tim Seyde, Ramin Hasani, Daniela Rus
DRL
13 Oct 2022

On the Robustness and Anomaly Detection of Sparse Neural Networks
Morgane Ayle, Bertrand Charpentier, John Rachwan, Daniel Zügner, Simon Geisler, Stephan Günnemann
AAML
09 Jul 2022

SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance
Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kévin Bailly
08 Jul 2022

Sparse Double Descent: Where Network Pruning Aggravates Overfitting
Zhengqi He, Zeke Xie, Quanzhi Zhu, Zengchang Qin
17 Jun 2022

"Understanding Robustness Lottery": A Geometric Visual Comparative Analysis of Neural Network Pruning Approaches
Zhimin Li, Shusen Liu, Xin Yu, Bhavya Kailkhura, Jie Cao, James Daniel Diffenderfer, P. Bremer, Valerio Pascucci
AAML
16 Jun 2022

Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm
Aidan Good, Jia-Huei Lin, Hannah Sieg, Mikey Ferguson, Xin Yu, Shandian Zhe, J. Wieczorek, Thiago Serra
07 Jun 2022

Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models
Clara Na, Sanket Vaibhav Mehta, Emma Strubell
25 May 2022

Robust Learning of Parsimonious Deep Neural Networks
Valentin Frank Ingmar Guenter, Athanasios Sideris
10 May 2022

A Tale of Two Models: Constructing Evasive Attacks on Edge Models
Wei Hao, Aahil Awatramani, Jia-Bin Hu, Chengzhi Mao, Pin-Chun Chen, Eyal Cidon, Asaf Cidon, Junfeng Yang
AAML
22 Apr 2022

Supervised Robustness-preserving Data-free Neural Network Pruning
Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, J. Dong
AAML
02 Apr 2022

The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe
09 Mar 2022

How Well Do Sparse ImageNet Models Transfer?
Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh
26 Nov 2021

Fire Together Wire Together: A Dynamic Pruning Approach with Self-Supervised Mask Prediction
Sara Elkerdawy, Mostafa Elhoushi, Hong Zhang, Nilanjan Ray
CVBM
15 Oct 2021

The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation
Orevaoghene Ahia, Julia Kreutzer, Sara Hooker
06 Oct 2021

Model Preserving Compression for Neural Networks
Jerry Chee, Megan Flynn, Anil Damle, Chris De Sa
30 Jul 2021

Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition
Lucas Liebenwein, Alaa Maalouf, O. Gal, Dan Feldman, Daniela Rus
23 Jul 2021

Sparse Flows: Pruning Continuous-depth Models
Lucas Liebenwein, Ramin Hasani, Alexander Amini, Daniela Rus
24 Jun 2021

A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness
James Diffenderfer, Brian Bartoldson, Shreya Chaganti, Jize Zhang, B. Kailkhura
OOD
16 Jun 2021

Pruning and Slicing Neural Networks using Formal Verification
O. Lahav, Guy Katz
28 May 2021

Post-Training Sparsity-Aware Quantization
Gil Shomron, F. Gabbay, Samer Kurzum, U. Weiser
MQ
23 May 2021

Scaling Up Exact Neural Network Compression by ReLU Stability
Thiago Serra, Xin Yu, Abhinav Kumar, Srikumar Ramalingam
15 Feb 2021

Robustness in Compressed Neural Networks for Object Detection
Sebastian Cygert, A. Czyżewski
10 Feb 2021

What is the State of Neural Network Pruning?
Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
06 Mar 2020