A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation

11 June 2022
Wenjing Yang, G. Wang, Jie Ding, Yuhong Yang
Tags: MLT

Papers citing "A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation"

4 / 4 papers shown

Probe Pruning: Accelerating LLMs through Dynamic Pruning via Model-Probing
Qi Le, Enmao Diao, Ziyan Wang, Xinran Wang, Jie Ding, Li Yang, Ali Anwar
24 Feb 2025

Provable Identifiability of Two-Layer ReLU Neural Networks via LASSO Regularization
Geng Li, G. Wang, Jie Ding
07 May 2023

Pruning Deep Neural Networks from a Sparsity Perspective
Enmao Diao, G. Wang, Jiawei Zhan, Yuhong Yang, Jie Ding, Vahid Tarokh
11 Feb 2023

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
Tags: MQ
31 Jan 2021