EIS -- a family of activation functions combining Exponential, ISRU, and Softplus
arXiv:2009.13501 (v2, latest)
28 September 2020
Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, A. Pandey

Papers citing "EIS -- a family of activation functions combining Exponential, ISRU, and Softplus" (19 papers)
TanhSoft -- a family of activation functions combining Tanh and Softplus (08 Sep 2020)
  Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, A. Pandey

Soft-Root-Sign Activation Function (01 Mar 2020)
  Yuan Zhou, Dandan Li, Shuwei Huo, S. Kung

MobileNetV2: Inverted Residuals and Linear Bottlenecks (13 Jan 2018)
  Mark Sandler, Andrew G. Howard, Menglong Zhu, A. Zhmoginov, Liang-Chieh Chen

Improving Deep Learning by Inverse Square Root Linear Units (ISRLUs) (27 Oct 2017)
  B. Carlile, G. Delamarter, Paul Kinney, Akiko Marti, Brian Whitney

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms (25 Aug 2017)
  Han Xiao, Kashif Rasul, Roland Vollgraf

DeepArchitect: Automatically Designing and Training Deep Architectures (28 Apr 2017)
  Renato M. P. Negrinho, Geoffrey J. Gordon

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications (17 Apr 2017)
  Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam

Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning (10 Feb 2017)
  Stefan Elfwing, E. Uchibe, Kenji Doya

Designing Neural Network Architectures using Reinforcement Learning (07 Nov 2016)
  Bowen Baker, O. Gupta, Nikhil Naik, Ramesh Raskar

Densely Connected Convolutional Networks (25 Aug 2016)
  Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger

Lets keep it simple, Using simple architectures to outperform deeper and more complex architectures (22 Aug 2016)
  S. H. HasanPour, Mohammad Rouhani, Mohsen Fayyaz, Mohammad Sabokrou

Gaussian Error Linear Units (GELUs) (27 Jun 2016)
  Dan Hendrycks, Kevin Gimpel

Wide Residual Networks (23 May 2016)
  Sergey Zagoruyko, N. Komodakis

Rethinking the Inception Architecture for Computer Vision (02 Dec 2015)
  Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Z. Wojna

Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) (23 Nov 2015)
  Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter

Empirical Evaluation of Rectified Activations in Convolutional Network (05 May 2015)
  Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (11 Feb 2015)
  Sergey Ioffe, Christian Szegedy

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (06 Feb 2015)
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun

Adam: A Method for Stochastic Optimization (22 Dec 2014)
  Diederik P. Kingma, Jimmy Ba