Cited By
Adaptive Blending Units: Trainable Activation Functions for Deep Neural Networks
26 June 2018
L. R. Sütfeld, Flemming Brieger, Holger Finger, S. Füllhase, G. Pipa
arXiv: 1806.10064
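For context on the title: the paper proposes activation functions built as a trainable blend of several base activations. The short PyTorch sketch below is only an illustration of that general idea under assumed choices; the base activation set, the softmax normalization of the blend weights, and all names are assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class AdaptiveBlendingUnit(nn.Module):
    """Illustrative sketch: a weighted blend of fixed base activations
    with trainable blending weights (not the authors' implementation)."""
    def __init__(self):
        super().__init__()
        # Hypothetical set of base activations to blend.
        self.bases = [torch.tanh, torch.relu, torch.sigmoid, lambda x: x]
        # One trainable blending weight per base activation.
        self.alpha = nn.Parameter(torch.ones(len(self.bases)))

    def forward(self, x):
        # Softmax keeps the weights positive and summing to one (an assumed choice).
        w = torch.softmax(self.alpha, dim=0)
        return sum(w[i] * f(x) for i, f in enumerate(self.bases))

# Usage: drop the unit into a network like any other activation.
net = nn.Sequential(nn.Linear(16, 32), AdaptiveBlendingUnit(), nn.Linear(32, 1))
out = net(torch.randn(4, 16))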
Papers citing "Adaptive Blending Units: Trainable Activation Functions for Deep Neural Networks" (7 of 7 papers shown):
1. Semiring Activation in Neural Networks
   B. Smets, Peter D. Donker, Jim W. Portegies (LLMSV), 29 May 2024

2. Nonlinearity Enhanced Adaptive Activation Functions
   David Yevick, 29 Mar 2024

3. Learning Specialized Activation Functions for Physics-informed Neural Networks
   Honghui Wang, Lu Lu, Shiji Song, Gao Huang (PINN, AI4CE), 08 Aug 2023

4. Bayesian optimization for sparse neural networks with trainable activation functions
   M. Fakhfakh, Lotfi Chaari, 10 Apr 2023

5. How important are activation functions in regression and classification? A survey, performance comparison, and future directions
   Ameya Dilip Jagtap, George Karniadakis (AI4CE), 06 Sep 2022

6. Activation Functions in Deep Learning: A Comprehensive Survey and Benchmark
   S. Dubey, S. Singh, B. B. Chaudhuri, 29 Sep 2021

7. Parametric Flatten-T Swish: An Adaptive Non-linear Activation Function For Deep Learning
   Hock Hung Chieng, Noorhaniza Wahid, P. Ong, 06 Nov 2020