arXiv:1811.04199
Fast On-the-fly Retraining-free Sparsification of Convolutional Neural Networks

10 November 2018
Amir H. Ashouri, T. Abdelrahman, Alwyn Dos Remedios
Papers citing "Fast On-the-fly Retraining-free Sparsification of Convolutional Neural Networks" (10 papers shown)
  1. A Survey on Compiler Autotuning using Machine Learning
     Amir H. Ashouri, W. Killian, John Cavazos, G. Palermo, Cristina Silvano (13 Jan 2018)
  2. Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
     Benoit Jacob, S. Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, Dmitry Kalenichenko (15 Dec 2017)
  3. Exploring the Regularity of Sparse Structure in Convolutional Neural Networks
     Huizi Mao, Song Han, Jeff Pool, Wenshuo Li, Xingyu Liu, Yu Wang, W. Dally (24 May 2017)
  4. A Regularized Framework for Sparse and Structured Neural Attention
     Vlad Niculae, Mathieu Blondel (22 May 2017)
  5. Efficient Processing of Deep Neural Networks: A Tutorial and Survey
     Vivienne Sze, Yu-hsin Chen, Tien-Ju Yang, J. Emer (27 Mar 2017)
  6. Learning Structured Sparsity in Deep Neural Networks
     W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li (12 Aug 2016)
  7. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
     F. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, W. Dally, Kurt Keutzer (24 Feb 2016)
  8. Sparsifying Neural Network Connections for Face Recognition
     Yi Sun, Xiaogang Wang, Xiaoou Tang (07 Dec 2015)
  9. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
     Song Han, Huizi Mao, W. Dally (01 Oct 2015)
  10. Learning both Weights and Connections for Efficient Neural Networks
     Song Han, Jeff Pool, J. Tran, W. Dally (08 Jun 2015)