Revise Saturated Activation Functions

It is generally believed that training deep neural networks with saturated activation functions, such as Sigmoid and Tanh, is difficult. Recent work shows that deep Tanh networks can converge with careful model initialization, while deep Sigmoid networks still fail. In this paper, we propose a re-scaled Sigmoid function that maintains the gradient at a stable scale. In addition, we break the symmetry of Tanh by penalizing its negative part. Our preliminary results on deep convolutional networks show that, even without stabilization techniques such as batch normalization and sophisticated initialization, the "re-scaled Sigmoid" converges robustly to a local optimum. Furthermore, the "leaky Tanh" is comparable to or even outperforms state-of-the-art non-saturated activation functions such as ReLU and leaky ReLU.
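A minimal sketch of the two ideas described in the abstract, assuming plausible forms rather than the paper's exact definitions: the scaling constant of the re-scaled Sigmoid and the negative-slope penalty of the leaky Tanh (here `scale=4.0` and `alpha=0.25`) are illustrative assumptions, and the function names are hypothetical.

```python
import numpy as np

def rescaled_sigmoid(x, scale=4.0):
    # Illustrative re-scaling: stretch and shift the standard sigmoid so the
    # output is roughly zero-centered and the peak gradient at x=0 is close
    # to 1 (scale=4.0 is an assumed constant, not taken from the paper).
    return scale / (1.0 + np.exp(-x)) - scale / 2.0

def leaky_tanh(x, alpha=0.25):
    # Illustrative "leaky" variant: shrink the negative part of tanh by a
    # factor alpha to break its symmetry (alpha=0.25 is an assumption).
    t = np.tanh(x)
    return np.where(t >= 0, t, alpha * t)

if __name__ == "__main__":
    xs = np.linspace(-4, 4, 9)
    print("re-scaled sigmoid:", np.round(rescaled_sigmoid(xs), 3))
    print("leaky tanh:       ", np.round(leaky_tanh(xs), 3))
```

With `scale=4.0`, the re-scaled Sigmoid's derivative at zero equals 1, which is one way to keep gradients from shrinking layer after layer; the leaky Tanh keeps a nonzero (but damped) response on negative inputs, analogous to how leaky ReLU modifies ReLU.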