ResearchTrend.AI
One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation

30 November 2019
Matthew Shunshi Zhang
Bradly C. Stadie

Papers citing "One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation"

8 papers shown
Graph Expansion in Pruned Recurrent Neural Network Layers Preserve Performance (GNN)
Suryam Arnav Kalra, Arindam Biswas, Pabitra Mitra, Biswajit Basu
17 Mar 2024
Learning a Consensus Sub-Network with Polarization Regularization and One Pass Training
Xiaoying Zhi, Varun Babbar, P. Sun, Fran Silavong, Ruibo Shi, Sean Moran
17 Feb 2023
One-shot Network Pruning at Initialization with Discriminative Image Patches (VLM)
Yinan Yang, Yu Wang, Yi Ji, Heng Qi, Jien Kato
13 Sep 2022
OPQ: Compressing Deep Neural Networks with One-shot Pruning-Quantization (MQ)
Peng Hu, Xi Peng, Erik Cambria, M. Aly, Jie Lin
23 May 2022
A Survey on Model Compression and Acceleration for Pretrained Language Models
Canwen Xu, Julian McAuley
15 Feb 2022
Spectral Pruning for Recurrent Neural Networks
Takashi Furuya, Kazuma Suetake, K. Taniguchi, Hiroyuki Kusumoto, Ryuji Saiin, Tomohiro Daimon
23 May 2021
Sparse Training Theory for Scalable and Efficient Agents
Decebal Constantin Mocanu, Elena Mocanu, T. Pinto, Selima Curci, Phuong H. Nguyen, M. Gibescu, D. Ernst, Z. Vale
02 Mar 2021
Exploring Weight Importance and Hessian Bias in Model Pruning
Mingchen Li, Yahya Sattar, Christos Thrampoulidis, Samet Oymak
19 Jun 2020