On the Selection of Initialization and Activation Function for Deep Neural Networks

21 May 2018
Soufiane Hayou, Arnaud Doucet, Judith Rousseau
ODL
arXiv: 1805.08266

Papers citing "On the Selection of Initialization and Activation Function for Deep Neural Networks"

9 / 9 papers shown
Criticality versus uniformity in deep neural networks
A. Bukva, Jurriaan de Gier, Kevin T. Grosvenor, R. Jefferson, K. Schalm, Eliot Schwander
10 Apr 2023

A Deep Collocation Method for the Bending Analysis of Kirchhoff Plate
Hongwei Guo, X. Zhuang, Timon Rabczuk
AI4CE
04 Feb 2021

Tensor Programs III: Neural Matrix Laws
Greg Yang
22 Sep 2020

Tensor Programs II: Neural Tangent Kernel for Any Architecture
Greg Yang
25 Jun 2020

PCW-Net: Pyramid Combination and Warping Cost Volume for Stereo Matching
Zhelun Shen, Yuchao Dai, Xibin Song, Zhibo Rao, Dingfu Zhou, Liangjun Zhang
23 Jun 2020

Neural Tangents: Fast and Easy Infinite Neural Networks in Python
Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Narain Sohl-Dickstein, S. Schoenholz
05 Dec 2019

L*ReLU: Piece-wise Linear Activation Functions for Deep Fine-grained Visual Categorization
Mina Basirat, P. Roth
27 Oct 2019

Eigenvalue distribution of nonlinear models of random matrices
L. Benigni, Sandrine Péché
05 Apr 2019

Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington
14 Jun 2018