Empirical analysis of non-linear activation functions for Deep Neural Networks in classification tasks

30 October 2017
Giovanni Alcantara
Abstract

We provide an overview of several non-linear activation functions in a neural network architecture that have proven successful in many machine learning applications. We conduct an empirical analysis of the effectiveness of these functions on the MNIST classification task, with the aim of clarifying which functions produce the best results overall. Based on this first set of results, we examine the effects of building deeper architectures with an increasing number of hidden layers. We also survey the impact of using, on the same task, different initialisation schemes for the weights of our neural network. Using these sets of experiments as a base, we conclude by providing an optimal neural network architecture that yields strong accuracy on the MNIST classification task.
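The paper does not ship code, but the core experiment the abstract describes — training otherwise identical networks on MNIST while varying only the hidden-layer activation, then comparing test accuracy — is easy to sketch. Below is a minimal illustration assuming a Keras-style setup; the activation list, layer width, and epoch count are placeholders for this sketch, not the paper's actual configuration.

# Sketch of the comparison described in the abstract: identical
# fully-connected MNIST classifiers differing only in the hidden-layer
# activation. Hyperparameters here are illustrative, not the paper's.
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_model(activation):
    # One hidden layer; only its activation varies between runs.
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation=activation),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

for activation in ["sigmoid", "tanh", "relu", "elu"]:
    model = build_model(activation)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{activation:>8}: test accuracy = {acc:.4f}")

The same loop extends naturally to the paper's other two axes: stacking additional Dense layers to study depth, and passing a kernel_initializer to each layer to compare weight-initialisation schemes.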
