ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

On the Impact of the Activation Function on Deep Neural Networks Training (arXiv:1902.06853)

19 February 2019
Soufiane Hayou, Arnaud Doucet, Judith Rousseau · ODL

Papers citing "On the Impact of the Activation Function on Deep Neural Networks Training"

45 / 95 papers shown

Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers
Guodong Zhang, Aleksandar Botev, James Martens · OffRL
34 · 26 · 0 · 15 Mar 2022

Deep Layer-wise Networks Have Closed-Form Weights
Chieh-Tsai Wu, A. Masoomi, Arthur Gretton, Jennifer Dy
29 · 3 · 0 · 01 Feb 2022

Interspecies Collaboration in the Design of Visual Identity: A Case Study
B. Jerbić, M. Švaco, F. Šuligoj, B. Sekoranja, J. Vidaković, ..., Borjan Pavlek, Bruno Bolfan, Davor Bruketa, Dina Borosic, Barbara Busic
14 · 1 · 0 · 25 Jan 2022

Gradients are Not All You Need
Luke Metz, C. Freeman, S. Schoenholz, Tal Kachman
30 · 93 · 0 · 10 Nov 2021

Feature Learning and Signal Propagation in Deep Neural Networks
Yizhang Lou, Chris Mingard, Yoonsoo Nam, Soufiane Hayou · MDE
24 · 17 · 0 · 22 Oct 2021

Bayesian neural network unit priors and generalized Weibull-tail property
M. Vladimirova, Julyan Arbel, Stéphane Girard · BDL
54 · 9 · 0 · 06 Oct 2021

Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics
Wuyang Chen, Xinyu Gong, Junru Wu, Yunchao Wei, Humphrey Shi, Zhicheng Yan, Yi Yang, Zhangyang Wang
26 · 9 · 0 · 26 Aug 2021

Neuron Campaign for Initialization Guided by Information Bottleneck Theory
Haitao Mao, Xu Chen, Qiang Fu, Lun Du, Shi Han, Dongmei Zhang · AI4CE
18 · 10 · 0 · 14 Aug 2021

Deep Stable neural networks: large-width asymptotics and convergence rates
Stefano Favaro, S. Fortini, Stefano Peluchetti · BDL
30 · 14 · 0 · 02 Aug 2021

Generalized Unsupervised Clustering of Hyperspectral Images of Geological Targets in the Near Infrared
Angela F. Gao, B. Rasmussen, Peter Kulits, E. Scheller, R. Greenberger, B. Ehlmann
25 · 17 · 0 · 24 Jun 2021

Wide stochastic networks: Gaussian limit and PAC-Bayesian training
Eugenio Clerico, George Deligiannidis, Arnaud Doucet
25 · 12 · 0 · 17 Jun 2021

Regularization in ResNet with Stochastic Depth
Soufiane Hayou, Fadhel Ayed
19 · 10 · 0 · 06 Jun 2021

Maximum and Leaky Maximum Propagation
Wolfgang Fuhl
21 · 5 · 0 · 21 May 2021

Multi-layer Perceptron Trainability Explained via Variability
Yueyao Yu, Yin Zhang
16 · 2 · 0 · 19 May 2021

Automatic Fault Detection for Deep Learning Programs Using Graph Transformations
Amin Nikanjam, Houssem Ben Braiek, Mohammad Mehdi Morovati, Foutse Khomh · GNN
9 · 24 · 0 · 17 May 2021

Activation function design for deep networks: linearity and effective initialisation
Michael Murray, V. Abrol, Jared Tanner · ODL · LLMSV
29 · 18 · 0 · 17 May 2021

Posterior contraction for deep Gaussian process priors
G. Finocchio, Johannes Schmidt-Hieber
35 · 11 · 0 · 16 May 2021

Finite volume method network for acceleration of unsteady computational fluid dynamics: non-reacting and reacting flows
J. Jeon, Juhyeong Lee, S. J. Kim
19 · 28 · 0 · 07 May 2021

Universal scaling laws in the gradient descent training of neural networks
Maksim Velikanov, Dmitry Yarotsky
48 · 9 · 0 · 02 May 2021

InfoNEAT: Information Theory-based NeuroEvolution of Augmenting Topologies for Side-channel Analysis
R. Acharya, F. Ganji, Domenic Forte · AAML
38 · 24 · 0 · 30 Apr 2021

Using activation histograms to bound the number of affine regions in ReLU feed-forward neural networks
Peter Hinz
14 · 6 · 0 · 31 Mar 2021

Initializing ReLU networks in an expressive subspace of weights
Dayal Singh, Sreejith G. J.
10 · 4 · 0 · 23 Mar 2021

Towards Deepening Graph Neural Networks: A GNTK-based Optimization Perspective
Wei Huang, Yayong Li, Weitao Du, Jie Yin, R. Xu, Ling-Hao Chen, Miao Zhang
26 · 17 · 0 · 03 Mar 2021

Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective
Wuyang Chen, Xinyu Gong, Zhangyang Wang · OOD
45 · 230 · 0 · 23 Feb 2021

Truly Sparse Neural Networks at Scale
Selima Curci, Decebal Constantin Mocanu, Mykola Pechenizkiy
35 · 19 · 0 · 02 Feb 2021

Formalising the Use of the Activation Function in Neural Inference
D. A. R. Sakthivadivel
13 · 4 · 0 · 02 Feb 2021

Painless step size adaptation for SGD
I. Kulikovskikh, Tarzan Legović
28 · 0 · 0 · 01 Feb 2021

Correlated Weights in Infinite Limits of Deep Convolutional Neural Networks
Adrià Garriga-Alonso, Mark van der Wilk
12 · 4 · 0 · 11 Jan 2021

Guiding Neural Network Initialization via Marginal Likelihood Maximization
Anthony S. Tai, Chunfeng Huang
11 · 0 · 0 · 17 Dec 2020

Kernel Dependence Network
Chieh-Tsai Wu, A. Masoomi, Arthur Gretton, Jennifer Dy
8 · 0 · 0 · 04 Nov 2020

Stable ResNet
Soufiane Hayou, Eugenio Clerico, Bo He, George Deligiannidis, Arnaud Doucet, Judith Rousseau · ODL · SSeg
46 · 51 · 0 · 24 Oct 2020

Review: Deep Learning in Electron Microscopy
Jeffrey M. Ede
34 · 79 · 0 · 17 Sep 2020

Why to "grow" and "harvest" deep learning models?
I. Kulikovskikh, Tarzan Legović · VLM
11 · 0 · 0 · 08 Aug 2020

Doubly infinite residual neural networks: a diffusion process approach
Stefano Peluchetti, Stefano Favaro
4 · 2 · 0 · 07 Jul 2020

Activation functions are not needed: the ratio net
Chi-Chun Zhou, Hai-Long Tu, Yue-Jie Hou, Zhen Ling, Yi Liu, Jian Hua
17 · 0 · 0 · 14 May 2020

Robust Classification of High-Dimensional Spectroscopy Data Using Deep Learning and Data Synthesis
James Houston, F. Glavin, Michael G. Madden
9 · 40 · 0 · 26 Mar 2020

Stable behaviour of infinitely wide deep neural networks
Stefano Favaro, S. Fortini, Stefano Peluchetti · BDL
12 · 28 · 0 · 01 Mar 2020

Robust Pruning at Initialization
Soufiane Hayou, Jean-François Ton, Arnaud Doucet, Yee Whye Teh
6 · 46 · 0 · 19 Feb 2020

Non-linear Neurons with Human-like Apical Dendrite Activations
Mariana-Iuliana Georgescu, Radu Tudor Ionescu, Nicolae-Cătălin Ristea, N. Sebe
13 · 19 · 0 · 02 Feb 2020

Effect of Activation Functions on the Training of Overparametrized Neural Nets
A. Panigrahi, Abhishek Shetty, Navin Goyal
13 · 20 · 0 · 16 Aug 2019

Exact Convergence Rates of the Neural Tangent Kernel in the Large Depth Limit
Soufiane Hayou, Arnaud Doucet, Judith Rousseau
16 · 4 · 0 · 31 May 2019

Infinitely deep neural networks as diffusion processes
Stefano Peluchetti, Stefano Favaro · ODL
14 · 31 · 0 · 27 May 2019

Sub-Weibull distributions: generalizing sub-Gaussian and sub-Exponential properties to heavier-tailed distributions
M. Vladimirova, Stéphane Girard, Hien Nguyen, Julyan Arbel
17 · 86 · 0 · 13 May 2019

BowTie - A deep learning feedforward neural network for sentiment analysis
Apostol T. Vassilev
9 · 5 · 0 · 18 Apr 2019

Critical initialisation in continuous approximations of binary neural networks
G. Stamatescu, Federica Gerace, C. Lucibello, I. Fuss, L. White
25 · 0 · 0 · 01 Feb 2019