Reinforced backpropagation for deep neural network learning

Standard error backpropagation is used in almost all modern neural network training to minimize the training error with respect to the network parameters. However, it typically suffers from the proliferation of saddle points in the high-dimensional parameter space. It is therefore highly desirable to design an efficient algorithm that escapes these saddle points and reaches a parameter region with better generalization capabilities. Here, we propose a simple extension of backpropagation, namely reinforced backpropagation, which adds previous first-order gradients in a stochastic manner, with a probability that increases with learning time. As verified on a simple synthetic dataset, this method significantly accelerates learning compared to standard backpropagation. Surprisingly, it also dramatically reduces over-fitting, even compared to the state-of-the-art adaptive learning algorithm Adam. On a benchmark handwritten-digit dataset, the learning performance is comparable to Adam, with the extra advantage of requiring one-fold less memory. Overall, our method introduces stochastic memory into the gradients, which may be an important starting point for understanding how gradient-based training algorithms work and how they relate to the generalization ability of deep networks.
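
The abstract only sketches the update rule, so the following is a minimal illustrative sketch rather than the authors' exact algorithm: with a probability p(t) that grows with training time, a stored previous gradient is added to the current gradient before a plain gradient-descent step. The schedule p(t), the learning rate, and the choice of what is kept in the memory buffer are assumptions made here for illustration.

```python
import numpy as np

def reinforced_sgd_step(params, grads, memory, t, lr=0.01, p_max=0.9, tau=1000.0):
    """One reinforced-backpropagation-style update (illustrative sketch).

    With probability p(t), which increases with the learning step t, the
    stored previous gradient is added to the current gradient before the
    parameter update. The schedule p(t) = p_max * (1 - exp(-t / tau)) is
    an assumption for illustration, not necessarily the paper's choice.
    """
    p_t = p_max * (1.0 - np.exp(-t / tau))      # reinforcement probability, grows with t
    new_params, new_memory = [], []
    for w, g, m in zip(params, grads, memory):
        if np.random.rand() < p_t:              # stochastically reinforce with the past gradient
            g_eff = g + m
        else:
            g_eff = g
        new_params.append(w - lr * g_eff)       # plain gradient-descent step on the effective gradient
        new_memory.append(g_eff)                # remember the (possibly reinforced) gradient for next time
    return new_params, new_memory
```

In this sketch only a single extra buffer per parameter is kept, which is consistent with the abstract's claim of needing one-fold less memory than Adam, since Adam maintains two moment estimates per parameter.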