A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation
arXiv: 2206.05604
11 June 2022
Wenjing Yang, G. Wang, Jie Ding, Yuhong Yang
Papers citing "A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation" (4 papers shown):
Probe Pruning: Accelerating LLMs through Dynamic Pruning via Model-Probing
Qi Le, Enmao Diao, Ziyan Wang, Xinran Wang, Jie Ding, Li Yang, Ali Anwar
24 Feb 2025
Provable Identifiability of Two-Layer ReLU Neural Networks via LASSO Regularization
Geng Li, G. Wang, Jie Ding
07 May 2023
Pruning Deep Neural Networks from a Sparsity Perspective
Enmao Diao, G. Wang, Jiawei Zhan, Yuhong Yang, Jie Ding, Vahid Tarokh
11 Feb 2023
Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
31 Jan 2021