Discovering Parametric Activation Functions
G. Bingham, Risto Miikkulainen · 5 June 2020 · arXiv:2006.03179 · Topics: ODL
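The paper's subject, as the title states, is discovering activation functions whose shapes are controlled by trainable parameters. As a minimal sketch of what "parametric" means here (not the paper's discovery method), the snippet below fits the single slope parameter α of a PReLU-style activation f(x) = max(0, x) + α·min(0, x) by gradient descent; PReLU itself comes from the cited "Delving Deep into Rectifiers" paper. The data, learning rate, and target slope 0.25 are all illustrative.

```python
import numpy as np

def prelu(x, alpha):
    # Parametric ReLU: identity for x > 0, learned slope alpha for x <= 0.
    return np.where(x > 0, x, alpha * x)

def dprelu_dalpha(x):
    # Partial derivative of prelu w.r.t. alpha: 0 for x > 0, x otherwise.
    return np.where(x > 0, 0.0, x)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
target = np.where(x > 0, x, 0.25 * x)  # synthetic data with true slope 0.25

alpha, lr = 0.0, 0.1
for _ in range(200):
    err = prelu(x, alpha) - target                # residuals of current fit
    grad = 2.0 * np.mean(err * dprelu_dalpha(x))  # d(MSE)/d(alpha)
    alpha -= lr * grad                            # gradient-descent update

print(f"learned alpha = {alpha:.3f}")  # converges to roughly 0.25
```

In a real network the same idea applies per layer or per channel: α is just another parameter updated by backpropagation together with the weights.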

Papers citing "Discovering Parametric Activation Functions" (31 papers)

Cauchy activation function and XNet
  Xin Li, Zhihong Xia, Hongkun Zhang · 28 Sep 2024 · 6 citations

KAN: Kolmogorov-Arnold Networks
  Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljacic, Thomas Y. Hou, Max Tegmark · 30 Apr 2024 · 538 citations

Effective Regularization Through Loss-Function Metalearning
  Santiago Gonzalez, Xin Qiu, Risto Miikkulainen · 02 Oct 2020 · 0 citations

Smooth Adversarial Training
  Cihang Xie, Mingxing Tan, Boqing Gong, Alan Yuille, Quoc V. Le · 25 Jun 2020 · Topics: OOD · 153 citations

SPLASH: Learnable Activation Functions for Improving Accuracy and Adversarial Robustness
  Mohammadamin Tavakoli, Forest Agostinelli, Pierre Baldi · 16 Jun 2020 · Topics: AAML, FAtt · 39 citations

Evolving Normalization-Activation Layers
  Hanxiao Liu, Andrew Brock, Karen Simonyan, Quoc V. Le · 06 Apr 2020 · 80 citations

Evolutionary Optimization of Deep Learning Activation Functions
  G. Bingham, William Macke, Risto Miikkulainen · 17 Feb 2020 · Topics: ODL · 50 citations

Optimizing Loss Functions Through Multivariate Taylor Polynomial Parameterization
  Santiago Gonzalez, Risto Miikkulainen · 31 Jan 2020 · 9 citations

Mish: A Self Regularized Non-Monotonic Activation Function
  Diganta Misra · 23 Aug 2019 · 679 citations

Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks
  Alejandro Molina, P. Schramowski, Kristian Kersting · 15 Jul 2019 · Topics: ODL · 80 citations

Towards Explaining the Regularization Effect of Initial Large Learning Rate in Training Neural Networks
  Yuanzhi Li, Colin Wei, Tengyu Ma · 10 Jul 2019 · 299 citations

Learning Activation Functions: A new paradigm for understanding Neural Networks
  Mohit Goyal, R. Goyal, Brejesh Lall · 23 Jun 2019 · 65 citations

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
  Mingxing Tan, Quoc V. Le · 28 May 2019 · Topics: 3DV, MedIm · 18,106 citations

Improved Training Speed, Accuracy, and Data Utilization Through Loss Function Optimization
  Santiago Gonzalez, Risto Miikkulainen · 27 May 2019 · 76 citations

A Survey on Neural Architecture Search
  Martin Wistuba, Ambrish Rawat, Tejaswini Pedapati · 04 May 2019 · Topics: AI4CE · 258 citations

Activation Functions: Comparison of trends in Practice and Research for Deep Learning
  Chigozie Nwankpa, Winifred Ijomah, Anthony Gachagan, Stephen Marshall · 08 Nov 2018 · 1,278 citations

The Quest for the Golden Activation Function
  Mina Basirat, P. Roth · 02 Aug 2018 · 53 citations

Regularized Evolution for Image Classifier Architecture Search
  Esteban Real, A. Aggarwal, Yanping Huang, Quoc V. Le · 05 Feb 2018 · 3,025 citations

Self-Normalizing Neural Networks
  Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter · 08 Jun 2017 · 2,511 citations

Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
  Stefan Elfwing, E. Uchibe, Kenji Doya · 10 Feb 2017 · 1,717 citations

Gaussian Error Linear Units (GELUs)
  Dan Hendrycks, Kevin Gimpel · 27 Jun 2016 · 4,994 citations

TensorFlow: A system for large-scale machine learning
  Martín Abadi, P. Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, ..., Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, Xiaoqiang Zheng · 27 May 2016 · Topics: GNN, AI4CE · 18,346 citations

Wide Residual Networks
  Sergey Zagoruyko, N. Komodakis · 23 May 2016 · 7,980 citations

Identity Mappings in Deep Residual Networks
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 16 Mar 2016 · 10,180 citations

Deep Residual Learning for Image Recognition
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 10 Dec 2015 · Topics: MedIm · 193,814 citations

Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
  Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter · 23 Nov 2015 · 5,521 citations

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
  Sergey Ioffe, Christian Szegedy · 11 Feb 2015 · Topics: OOD · 43,277 citations

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 06 Feb 2015 · Topics: VLM · 18,609 citations

Learning Activation Functions to Improve Deep Neural Networks
  Forest Agostinelli, Matthew Hoffman, Peter Sadowski, Pierre Baldi · 21 Dec 2014 · Topics: ODL · 475 citations

Striving for Simplicity: The All Convolutional Net
  Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller · 21 Dec 2014 · Topics: FAtt · 4,667 citations

Improving neural networks by preventing co-adaptation of feature detectors
  Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov · 03 Jul 2012 · Topics: VLM · 7,660 citations