Predicting Parameters in Deep Learning

3 June 2013
Misha Denil, B. Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, Nando de Freitas
OOD

Papers citing "Predicting Parameters in Deep Learning"

50 / 240 papers shown

Backpropagation Neural Tree
Varun Ojha, Giuseppe Nicosia
BDL · 36 · 14 · 0 · 04 Feb 2022

Auto-Compressing Subset Pruning for Semantic Image Segmentation
Konstantin Ditschuneit, Johannes Otterbach
21 · 5 · 0 · 26 Jan 2022

Network Compression via Central Filter
Y. Duan, Xiaofang Hu, Yue Zhou, Qiang Liu, Shukai Duan
3DPC · 19 · 1 · 0 · 10 Dec 2021

A New Measure of Model Redundancy for Compressed Convolutional Neural Networks
Feiqing Huang, Yuefeng Si, Yao Zheng, Guodong Li
39 · 1 · 0 · 09 Dec 2021

Neural Network Quantization for Efficient Inference: A Survey
Olivia Weng
MQ · 28 · 23 · 0 · 08 Dec 2021

Low-rank Tensor Decomposition for Compression of Convolutional Neural Networks Using Funnel Regularization
Bo-Shiuan Chu, Che-Rung Lee
26 · 11 · 0 · 07 Dec 2021

Gabor filter incorporated CNN for compression
Akihiro Imamura, N. Arizumi
CVBM · 28 · 2 · 0 · 29 Oct 2021

S-Cyc: A Learning Rate Schedule for Iterative Pruning of ReLU-based Networks
Shiyu Liu, Chong Min John Tan, Mehul Motani
CLL · 29 · 4 · 0 · 17 Oct 2021

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong
UQCV · MLT · 31 · 13 · 0 · 12 Oct 2021

Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks
Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy
CLL · 35 · 8 · 0 · 11 Oct 2021

Weight Evolution: Improving Deep Neural Networks Training through Evolving Inferior Weight Values
Zhenquan Lin, K. Guo, Xiaofen Xing, Xiangmin Xu
ODL · 24 · 1 · 0 · 09 Oct 2021

Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition
Marawan Gamal Abdel Hameed, Marzieh S. Tahaei, A. Mosleh, V. Nia
47 · 25 · 0 · 29 Sep 2021

Comfetch: Federated Learning of Large Networks on Constrained Clients via Sketching
Tahseen Rabbani, Brandon Yushan Feng, Marco Bornstein, Kyle Rui Sang, Yifan Yang, Arjun Rajkumar, A. Varshney, Furong Huang
FedML · 61 · 2 · 0 · 17 Sep 2021

On the Compression of Neural Networks Using $\ell_0$-Norm Regularization and Weight Pruning
F. Oliveira, E. Batista, R. Seara
20 · 9 · 0 · 10 Sep 2021

Learning the hypotheses space from data through a U-curve algorithm
Diego Marcondes, Adilson Simonis, Junior Barrera
39 · 1 · 0 · 08 Sep 2021

Greenformers: Improving Computation and Memory Efficiency in Transformer Models via Low-Rank Approximation
Samuel Cahyawijaya
31 · 12 · 0 · 24 Aug 2021

InsPose: Instance-Aware Networks for Single-Stage Multi-Person Pose Estimation
Dahu Shi, Xing Wei, Xiaodong Yu, Wenming Tan, Ye Ren, Shiliang Pu
3DH · 43 · 30 · 0 · 19 Jul 2021

Training Compact CNNs for Image Classification using Dynamic-coded Filter Fusion
Mingbao Lin, Bohong Chen, Rongrong Ji
VLM · 35 · 23 · 0 · 14 Jul 2021

Learning Gradual Argumentation Frameworks using Genetic Algorithms
J. Spieler, Nico Potyka, Steffen Staab
AI4CE · 36 · 4 · 0 · 25 Jun 2021

Knowledge Distillation via Instance-level Sequence Learning
Haoran Zhao, Xin Sun, Junyu Dong, Zihe Dong, Qiong Li
34 · 23 · 0 · 21 Jun 2021

Spectral Pruning for Recurrent Neural Networks
Takashi Furuya, Kazuma Suetake, K. Taniguchi, Hiroyuki Kusumoto, Ryuji Saiin, Tomohiro Daimon
27 · 4 · 0 · 23 May 2021

Initialization and Regularization of Factorized Neural Layers
M. Khodak, Neil A. Tenenholtz, Lester W. Mackey, Nicolò Fusi
65 · 56 · 0 · 03 May 2021

Piggyback GAN: Efficient Lifelong Learning for Image Conditioned Generation
Mengyao Zhai, Lei Chen, Jiawei He, Megha Nawhal, Frederick Tung, Greg Mori
CLL · 38 · 28 · 0 · 24 Apr 2021

Compacting Deep Neural Networks for Internet of Things: Methods and Applications
Ke Zhang, Hanbo Ying, Hongning Dai, Lin Li, Yuangyuang Peng, Keyi Guo, Hongfang Yu
21 · 38 · 0 · 20 Mar 2021

Reframing Neural Networks: Deep Structure in Overcomplete Representations
Calvin Murdock, George Cazenavette, Simon Lucey
BDL · 41 · 4 · 0 · 10 Mar 2021

Consistent Sparse Deep Learning: Theory and Computation
Y. Sun, Qifan Song, F. Liang
BDL · 48 · 27 · 0 · 25 Feb 2021

Cross-Layer Distillation with Semantic Calibration
Defang Chen, Jian-Ping Mei, Yuan Zhang, Can Wang, Yan Feng, Chun-Yen Chen
FedML · 45 · 288 · 0 · 06 Dec 2020

Bringing AI To Edge: From Deep Learning's Perspective
Di Liu, Hao Kong, Xiangzhong Luo, Weichen Liu, Ravi Subramaniam
52 · 116 · 0 · 25 Nov 2020

Deep learning insights into cosmological structure formation
Luisa Lucie-Smith, H. Peiris, A. Pontzen, Brian D. Nord, Jeyan Thiyagalingam
24 · 6 · 0 · 20 Nov 2020

Improving Neural Network Training in Low Dimensional Random Bases
Frithjof Gressmann, Zach Eaton-Rosen, Carlo Luschi
30 · 28 · 0 · 09 Nov 2020

Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks
Julieta Martinez, Jashan Shewakramani, Ting Liu, Ioan Andrei Bârsan, Wenyuan Zeng, R. Urtasun
MQ · 26 · 30 · 0 · 29 Oct 2020

Anti-Distillation: Improving reproducibility of deep networks
G. Shamir, Lorenzo Coviello
46 · 20 · 0 · 19 Oct 2020

Computing Systems for Autonomous Driving: State-of-the-Art and Challenges
Liangkai Liu, Sidi Lu, Ren Zhong, Baofu Wu, Yongtao Yao, Qingyan Zhang, Weisong Shi
27 · 268 · 0 · 30 Sep 2020

A Partial Regularization Method for Network Compression
E. Zhenqian, Weiguo Gao
18 · 0 · 0 · 03 Sep 2020

Stable Low-rank Tensor Decomposition for Compression of Convolutional Neural Network
Anh-Huy Phan, Konstantin Sobolev, Konstantin Sozykin, Dmitry Ermilov, Julia Gusak, P. Tichavský, Valeriy Glukhov, Ivan Oseledets, A. Cichocki
BDL · 27 · 129 · 0 · 12 Aug 2020

Compression of Deep Learning Models for Text: A Survey
Manish Gupta, Puneet Agrawal
VLM · MedIm · AI4CE · 22 · 115 · 0 · 12 Aug 2020

Sparse Linear Networks with a Fixed Butterfly Structure: Theory and Practice
Nir Ailon, Omer Leibovitch, Vineet Nair
15 · 14 · 0 · 17 Jul 2020

T-Basis: a Compact Representation for Neural Networks
Anton Obukhov, M. Rakhuba, Stamatios Georgoulis, Menelaos Kanakis, Dengxin Dai, Luc Van Gool
41 · 27 · 0 · 13 Jul 2020

Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights
Shail Dave, Riyadh Baghdadi, Tony Nowatzki, Sasikanth Avancha, Aviral Shrivastava, Baoxin Li
64 · 82 · 0 · 02 Jul 2020

Principal Component Networks: Parameter Reduction Early in Training
R. Waleffe, Theodoros Rekatsinas
3DPC · 19 · 9 · 0 · 23 Jun 2020

Deep Polynomial Neural Networks
Grigorios G. Chrysos, Stylianos Moschoglou, Giorgos Bouritsas, Jiankang Deng, Yannis Panagakis, S. Zafeiriou
29 · 92 · 0 · 20 Jun 2020

Infinite Feature Selection: A Graph-based Feature Filtering Approach
Giorgio Roffo, Simone Melzi, U. Castellani, Alessandro Vinciarelli, Marco Cristani
35 · 107 · 0 · 15 Jun 2020

A Framework for Neural Network Pruning Using Gibbs Distributions
Alex Labach, S. Valaee
9 · 5 · 0 · 08 Jun 2020

Premium Access to Convolutional Neural Networks
Julien Bringer, H. Chabanne, Linda Guiga
HAI · 6 · 0 · 0 · 22 May 2020

ZynqNet: An FPGA-Accelerated Embedded Convolutional Neural Network
David Gschwend
35 · 64 · 0 · 14 May 2020

Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey
Jiayi Liu, S. Tripathi, Unmesh Kurup, Mohak Shah
3DPC · MedIm · 30 · 52 · 0 · 08 May 2020

A Generic Network Compression Framework for Sequential Recommender Systems
Yang Sun, Fajie Yuan, Ming Yang, Guoao Wei, Zhou Zhao, Duo Liu
26 · 54 · 0 · 21 Apr 2020

GANSpace: Discovering Interpretable GAN Controls
Erik Härkönen, Aaron Hertzmann, J. Lehtinen, Sylvain Paris
75 · 900 · 0 · 06 Apr 2020

Dataless Model Selection with the Deep Frame Potential
Calvin Murdock, Simon Lucey
41 · 6 · 0 · 30 Mar 2020

Rethinking Depthwise Separable Convolutions: How Intra-Kernel Correlations Lead to Improved MobileNets
D. Haase, Manuel Amthor
20 · 132 · 0 · 30 Mar 2020