APTx: better activation function than MISH, SWISH, and ReLU's variants used in deep learning
Ravin Kumar
arXiv:2209.06119, 10 September 2022
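For context, the activation functions named in the title can be sketched in a few lines of NumPy. ReLU, Swish, and Mish follow their standard definitions; the APTx form (alpha + tanh(beta * x)) * gamma * x, with alpha = beta = 1 and gamma = 0.5 as the default reported in the paper, is proposed there as a cheaper, Mish-like activation. This is a minimal illustration for comparison, not the paper's reference implementation.

import numpy as np

def relu(x):
    # ReLU: max(0, x)
    return np.maximum(0.0, x)

def swish(x, beta=1.0):
    # Swish: x * sigmoid(beta * x)
    return x / (1.0 + np.exp(-beta * x))

def mish(x):
    # Mish: x * tanh(softplus(x)); log1p(exp(x)) is softplus
    return x * np.tanh(np.log1p(np.exp(x)))

def aptx(x, alpha=1.0, beta=1.0, gamma=0.5):
    # APTx: (alpha + tanh(beta * x)) * gamma * x
    # With the default parameters it tracks Mish closely while using fewer operations.
    return (alpha + np.tanh(beta * x)) * gamma * x

if __name__ == "__main__":
    x = np.linspace(-3.0, 3.0, 7)
    print(np.round(aptx(x), 3))
    print(np.round(mish(x), 3))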
Papers citing "APTx: better activation function than MISH, SWISH, and ReLU's variants used in deep learning" (6 papers shown)
Improving Classification Neural Networks by using Absolute activation function (MNIST/LeNET-5 example)
Oleg I. Berngardt, 23 Apr 2023

Review and Comparison of Commonly Used Activation Functions for Deep Neural Networks
Tomasz Szandała, 15 Oct 2020

Deep Learning using Rectified Linear Units (ReLU)
Abien Fred Agarap, 22 Mar 2018

Searching for Activation Functions
Prajit Ramachandran, Barret Zoph, Quoc V. Le, 16 Oct 2017

Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter, 23 Nov 2015

Deeply learned face representations are sparse, selective, and robust (CVBM)
Yi Sun, Xiaogang Wang, Xiaoou Tang, 03 Dec 2014