ResearchTrend.AI
On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units

Zhibin Liao, G. Carneiro
3 August 2015 · arXiv:1508.00330 (v2, latest)

Papers citing "On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units"

7 of 7 citing papers shown:

- Shrinkage Initialization for Smooth Learning of Neural Networks. Miao Cheng, Feiyan Zhou, Hongwei Zou, Limin Wang. 12 Apr 2025. [AI4CE]
- MatConvNet - Convolutional Neural Networks for MATLAB. Andrea Vedaldi, Karel Lenc. 15 Dec 2014.
- Deeply-Supervised Nets. Chen-Yu Lee, Saining Xie, Patrick W. Gallagher, Zhengyou Zhang, Zhuowen Tu. 18 Sep 2014.
- On the Number of Linear Regions of Deep Neural Networks. Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio. 08 Feb 2014.
- Network In Network. Min Lin, Qiang Chen, Shuicheng Yan. 16 Dec 2013.
- Maxout Networks. Ian Goodfellow, David Warde-Farley, M. Berk Mirza, Aaron Courville, Yoshua Bengio. 18 Feb 2013. [OOD]
- Stochastic Pooling for Regularization of Deep Convolutional Neural Networks. Matthew D. Zeiler, Rob Fergus. 16 Jan 2013.