Predicting Parameters in Deep Learning
arXiv:1306.0543, v2 (latest)
3 June 2013
Misha Denil, B. Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, Nando de Freitas
Topics: OOD

Papers citing "Predicting Parameters in Deep Learning"

50 of 392 citing papers shown. Each entry gives the title, authors, submission date, and topic tags (in brackets) where available.

ProjectionNet: Learning Efficient On-Device Deep Networks Using Neural Projections
Sujith Ravi (02 Aug 2017)

Extremely Low Bit Neural Network: Squeeze the Last Bit Out with ADMM
Cong Leng, Hao Li, Shenghuo Zhu, Rong Jin (24 Jul 2017) [MQ]

Neuron Pruning for Compressing Deep Networks using Maxout Architectures
Fernando Moya Rueda, René Grzeszick, G. Fink (21 Jul 2017) [CVBM]

ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
Jian-Hao Luo, Jianxin Wu, Weiyao Lin (20 Jul 2017)

Model compression as constrained optimization, with application to neural nets. Part II: quantization
M. A. Carreira-Perpiñán, Yerlan Idelbayev (13 Jul 2017) [MQ]

Stochastic, Distributed and Federated Optimization for Machine Learning
Jakub Konecný (04 Jul 2017) [FedML]

An Entropy-based Pruning Method for CNN Compression
Jian-Hao Luo, Jianxin Wu (19 Jun 2017)

Sparse Neural Networks Topologies
Alfred Bourely, John Patrick Boueri, Krzysztof Choromonski (18 Jun 2017) [GNN]

Enriched Deep Recurrent Visual Attention Model for Multiple Object Recognition
Artsiom Ablavatski, Shijian Lu, Jianfei Cai (12 Jun 2017)

Network Sketching: Exploiting Binary Structure in Deep CNNs
Yiwen Guo, Anbang Yao, Hao Zhao, Yurong Chen (07 Jun 2017) [MQ]

IDK Cascades: Fast Deep Learning by Learning not to Overthink
Xin Wang, Yujia Luo, D. Crankshaw, Alexey Tumanov, Fisher Yu, Joseph E. Gonzalez (03 Jun 2017)

Kronecker Recurrent Units
C. Jose, Moustapha Cissé, François Fleuret (29 May 2017) [ODL]

Bayesian Compression for Deep Learning
Christos Louizos, Karen Ullrich, Max Welling (24 May 2017) [UQCV, BDL]

Compressing Recurrent Neural Network with Tensor Train
Andros Tjandra, S. Sakti, Satoshi Nakamura (23 May 2017)

Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
Xin Luna Dong, Shangyu Chen, Sinno Jialin Pan (22 May 2017)

Exploring Sparsity in Recurrent Neural Networks
Sharan Narang, Erich Elsen, G. Diamos, Shubho Sengupta (17 Apr 2017)

DyVEDeep: Dynamic Variable Effort Deep Neural Networks
Sanjay Ganapathy, Swagath Venkataramani, Balaraman Ravindran, A. Raghunathan (04 Apr 2017)

Factorization tricks for LSTM networks
Oleksii Kuchaiev, Boris Ginsburg (31 Mar 2017)

Towards thinner convolutional neural networks through Gradually Global Pruning
Z. Wang, Ce Zhu, Zhiqiang Xia, Qi Guo, Yipeng Liu (29 Mar 2017) [CVBM]

Coordinating Filters for Faster Deep Neural Networks
W. Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li (28 Mar 2017)

The Power of Sparsity in Convolutional Neural Networks
Soravit Changpinyo, Mark Sandler, A. Zhmoginov (21 Feb 2017)

Soft Weight-Sharing for Neural Network Compression
Karen Ullrich, Edward Meeds, Max Welling (13 Feb 2017)

Deep Learning with Low Precision by Half-wave Gaussian Quantization
Zhaowei Cai, Xiaodong He, Jian Sun, Nuno Vasconcelos (03 Feb 2017) [MQ]

FastText.zip: Compressing text classification models
Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, Tomas Mikolov (12 Dec 2016) [MQ]

Towards the Limit of Network Quantization
Yoojin Choi, Mostafa El-Khamy, Jungwon Lee (05 Dec 2016) [MQ]

Diet Networks: Thin Parameters for Fat Genomics
Adriana Romero, P. Carrier, Akram Erraqabi, Tristan Sylvain, Alex Auvolat, Etienne Dejoie, Marc-André Legault, M. Dubé, J. Hussin, Yoshua Bengio (28 Nov 2016)

Generalized Dropout
Suraj Srinivas, R. Venkatesh Babu (21 Nov 2016) [BDL]

Training Sparse Neural Networks
Suraj Srinivas, Akshayvarun Subramanya, R. Venkatesh Babu (21 Nov 2016)

LCNN: Lookup-based Convolutional Neural Network
Hessam Bagherinezhad, Mohammad Rastegari, Ali Farhadi (20 Nov 2016)

Learning the Number of Neurons in Deep Networks
J. Álvarez, Mathieu Salzmann (19 Nov 2016)

Ultimate tensorization: compressing convolutional and FC layers alike
T. Garipov, D. Podoprikhin, Alexander Novikov, Dmitry Vetrov (10 Nov 2016)

Fixed-point Factorized Networks
Peisong Wang, Jian Cheng (07 Nov 2016) [MQ]

Deep Model Compression: Distilling Knowledge from Noisy Teachers
Bharat Bhusan Sau, V. Balasubramanian (30 Oct 2016)

Structured adaptive and random spinners for fast machine learning computations
Mariusz Bojarski, A. Choromańska, K. Choromanski, Francois Fagan, Cédric Gouy-Pailler, Anne Morvan, Nourhan Sakr, Tamás Sarlós, Jamal Atif (19 Oct 2016)

Federated Learning: Strategies for Improving Communication Efficiency
Jakub Konecný, H. B. McMahan, Felix X. Yu, Peter Richtárik, A. Suresh, Dave Bacon (18 Oct 2016) [FedML]

Random Feature Expansions for Deep Gaussian Processes
Kurt Cutajar, Edwin V. Bonilla, Pietro Michiardi, Maurizio Filippone (14 Oct 2016) [BDL]

HyperNetworks
David R Ha, Andrew M. Dai, Quoc V. Le (27 Sep 2016)

Pruning Filters for Efficient ConvNets
Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf (31 Aug 2016) [3DPC]

Local Binary Convolutional Neural Networks
Felix Juefei Xu, Vishnu Boddeti, Marios Savvides (22 Aug 2016) [MQ]

Dynamic Network Surgery for Efficient DNNs
Yiwen Guo, Anbang Yao, Yurong Chen (16 Aug 2016)

About Pyramid Structure in Convolutional Neural Networks
I. Ullah, A. Petrosino (14 Aug 2016) [3DV]

Learning Structured Sparsity in Deep Neural Networks
W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li (12 Aug 2016)

Faster CNNs with Direct Sparse Convolutions and Guided Pruning
Jongsoo Park, Sheng Li, W. Wen, P. T. P. Tang, Hai Helen Li, Yiran Chen, Pradeep Dubey (04 Aug 2016)

Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures
Hengyuan Hu, Rui Peng, Yu-Wing Tai, Chi-Keung Tang (12 Jul 2016)

Group Sparse Regularization for Deep Neural Networks
Simone Scardapane, Danilo Comminiello, Amir Hussain, A. Uncini (02 Jul 2016)

Sequence-Level Knowledge Distillation
Yoon Kim, Alexander M. Rush (25 Jun 2016)

DecomposeMe: Simplifying ConvNets for End-to-End Learning
J. Álvarez, L. Petersson (17 Jun 2016)

Learning feed-forward one-shot learners
Luca Bertinetto, João F. Henriques, Jack Valmadre, Philip Torr, Andrea Vedaldi (16 Jun 2016)

Convolution by Evolution: Differentiable Pattern Producing Networks
Chrisantha Fernando, Dylan Banarse, Malcolm Reynolds, F. Besse, David Pfau, Max Jaderberg, Marc Lanctot, Daan Wierstra (08 Jun 2016)

Ensemble-Compression: A New Method for Parallel Training of Deep Neural Networks
Shizhao Sun, Wei Chen, Jiang Bian, Xiaoguang Liu, Tie-Yan Liu (02 Jun 2016) [FedML]