SMU: smooth activation function for deep networks using smoothing maximum technique

8 November 2021
Koushik Biswas
Sandeep Kumar
Shilpak Banerjee
A. Pandey
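The paper's central construction is a smooth approximation of the maximum function: using |x| ≈ x·erf(μx), the leaky-ReLU-style max(x, αx) becomes the SMU activation f(x) = ((1+α)x + (1−α)x·erf(μ(1−α)x))/2. Below is a minimal PyTorch sketch of that formula; the module name, the fixed α with trainable μ, and the default values are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of the SMU activation based on the formula
#   f(x) = ((1 + alpha) * x + (1 - alpha) * x * erf(mu * (1 - alpha) * x)) / 2,
# a smooth approximation of max(x, alpha * x). Default alpha and mu below are
# assumptions for illustration only.
import torch
import torch.nn as nn


class SMU(nn.Module):
    def __init__(self, alpha: float = 0.25, mu: float = 1.0):
        super().__init__()
        self.alpha = alpha                         # slope of the negative branch, kept fixed
        self.mu = nn.Parameter(torch.tensor(mu))   # smoothing parameter, trainable

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return 0.5 * (
            (1 + self.alpha) * x
            + (1 - self.alpha) * x * torch.erf(self.mu * (1 - self.alpha) * x)
        )


if __name__ == "__main__":
    act = SMU()
    x = torch.linspace(-3, 3, 7)
    print(act(x))  # smoothly interpolates between alpha * x and x
```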

Papers citing "SMU: smooth activation function for deep networks using smoothing maximum technique"

4 / 4 papers shown
Associative memory and dead neurons
V. Fanaskov, Ivan Oseledets
02 Oct 2024
Efficient Federated Learning via Local Adaptive Amended Optimizer with Linear Speedup
Yan Sun, Li Shen, Hao Sun, Liang Ding, Dacheng Tao
FedML
30 Jul 2023
Interaction-Aware Trajectory Planning for Autonomous Vehicles with Analytic Integration of Neural Networks into Model Predictive Control
Piyush Gupta, David Isele, Donggun Lee, S. Bae
13 Jan 2023
ErfAct and Pserf: Non-monotonic Smooth Trainable Activation Functions
Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, A. Pandey
09 Sep 2021