ResearchTrend.AI
Orthogonal-Padé Activation Functions: Trainable Activation functions for smooth and faster convergence in deep networks
arXiv: 2106.09693 · 17 June 2021
Koushik Biswas, Shilpak Banerjee, A. Pandey

Papers citing "Orthogonal-Padé Activation Functions: Trainable Activation functions for smooth and faster convergence in deep networks" (20 of 20 papers shown; citation counts as listed on the page):

  1. EIS -- a family of activation functions combining Exponential, ISRU, and Softplus
     Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, A. Pandey · 28 Sep 2020 · 2 citations

  2. TanhSoft -- a family of activation functions combining Tanh and Softplus
     Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, A. Pandey · 08 Sep 2020 · 5 citations

  3. PyTorch: An Imperative Style, High-Performance Deep Learning Library
     Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala · 03 Dec 2019 · 42,639 citations

  4. Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks
     Alejandro Molina, P. Schramowski, Kristian Kersting · 15 Jul 2019 · 82 citations

  5. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
     Mingxing Tan, Quoc V. Le · 28 May 2019 · 18,193 citations

  6. Universal Approximation with Deep Narrow Networks
     Patrick Kidger, Terry Lyons · 21 May 2019 · 334 citations

  7. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
     Ningning Ma, Xiangyu Zhang, Haitao Zheng, Jian Sun · 30 Jul 2018 · 5,012 citations

  8. MobileNetV2: Inverted Residuals and Linear Bottlenecks
     Mark Sandler, Andrew G. Howard, Menglong Zhu, A. Zhmoginov, Liang-Chieh Chen · 13 Jan 2018 · 19,353 citations

  9. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
     Han Xiao, Kashif Rasul, Roland Vollgraf · 25 Aug 2017 · 8,928 citations

  10. Deep Layer Aggregation
      Fisher Yu, Dequan Wang, Evan Shelhamer, Trevor Darrell · 20 Jul 2017 · 1,333 citations

  11. Densely Connected Convolutional Networks
      Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger · 25 Aug 2016 · 36,892 citations

  12. Gaussian Error Linear Units (GELUs)
      Dan Hendrycks, Kevin Gimpel · 27 Jun 2016 · 5,049 citations

  13. Wide Residual Networks
      Sergey Zagoruyko, N. Komodakis · 23 May 2016 · 8,002 citations

  14. Identity Mappings in Deep Residual Networks
      Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 16 Mar 2016 · 10,196 citations

  15. Deep Residual Learning for Image Recognition
      Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 10 Dec 2015 · 194,510 citations

  16. SSD: Single Shot MultiBox Detector
      Wei Liu, Dragomir Anguelov, D. Erhan, Christian Szegedy, Scott E. Reed, Cheng-Yang Fu, Alexander C. Berg · 08 Dec 2015 · 29,879 citations

  17. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
      Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter · 23 Nov 2015 · 5,536 citations

  18. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
      Sergey Ioffe, Christian Szegedy · 11 Feb 2015 · 43,347 citations

  19. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
      Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · 06 Feb 2015 · 18,654 citations

  20. Adam: A Method for Stochastic Optimization
      Diederik P. Kingma, Jimmy Ba · 22 Dec 2014 · 150,364 citations