APTx: better activation function than MISH, SWISH, and ReLU's variants used in deep learning

10 September 2022
Ravin Kumar

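For orientation only, here is a minimal NumPy sketch of the APTx activation that this page's paper proposes. The formula APTx(x) = (alpha + tanh(beta * x)) * gamma * x and the default values alpha = beta = 1, gamma = 0.5 are my reading of the cited paper, not something stated on this page, so treat them as assumptions to be checked against arXiv:2209.06119.

```python
import numpy as np

def aptx(x, alpha=1.0, beta=1.0, gamma=0.5):
    """Sketch of the APTx activation (assumed form: (alpha + tanh(beta*x)) * gamma * x)."""
    return (alpha + np.tanh(beta * x)) * gamma * x

def mish(x):
    """MISH for comparison: x * tanh(softplus(x))."""
    return x * np.tanh(np.log1p(np.exp(x)))

if __name__ == "__main__":
    xs = np.linspace(-4.0, 4.0, 9)
    # With the assumed defaults, APTx closely tracks MISH while using
    # fewer elementary operations per element.
    print("APTx:", np.round(aptx(xs), 4))
    print("MISH:", np.round(mish(xs), 4))
```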
Papers citing "APTx: better activation function than MISH, SWISH, and ReLU's variants used in deep learning"

6 papers shown.

Improving Classification Neural Networks by using Absolute activation function (MNIST/LeNET-5 example)
Oleg I. Berngardt, 23 Apr 2023

Review and Comparison of Commonly Used Activation Functions for Deep Neural Networks
Tomasz Szandała, 15 Oct 2020

Deep Learning using Rectified Linear Units (ReLU)
Abien Fred Agarap, 22 Mar 2018

Searching for Activation Functions
Prajit Ramachandran, Barret Zoph, Quoc V. Le, 16 Oct 2017

Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter, 23 Nov 2015

Deeply learned face representations are sparse, selective, and robust
Yi Sun, Xiaogang Wang, Xiaoou Tang, 03 Dec 2014