Improving the Backpropagation Algorithm with Consequentialism Weight Updates over Mini-Batches

Abstract

Many attempts have been made to improve adaptive filters, and these improvements can also benefit backpropagation (BP). Normalized least mean squares (NLMS) is one of the most successful algorithms derived from least mean squares (LMS); however, it has not previously been extended to multi-layer neural networks. Here, we first show that a multi-layer neural network can be viewed as a stack of adaptive filters. We then introduce interpretations of NLMS for a single fully-connected (FC) layer that are more comprehensible than the complicated geometric interpretation of the affine projection algorithm (APA), generalize easily to, for instance, convolutional neural networks, and work better with mini-batch training. With this new viewpoint, we introduce an improved algorithm that predicts, and then corrects, the adverse consequences of the weight updates performed by BP before they happen. Finally, the proposed method is compatible with stochastic gradient descent (SGD) and applicable to momentum-based variants such as RMSProp, Adam, and NAG. Our experiments demonstrate the usefulness of the algorithm for training deep neural networks.
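
For readers unfamiliar with NLMS, the sketch below shows the classical single-filter NLMS update, i.e. an LMS step normalized by the input energy. It is only an illustration of the kind of normalized update the abstract builds on, not the paper's algorithm; the function name, step size, and toy data are all illustrative assumptions.

```python
# Minimal sketch of the classical NLMS update for one adaptive filter.
# Illustrative only; this is NOT the paper's consequentialism update.
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-8):
    """One NLMS step: the LMS update is normalized by the input energy ||x||^2."""
    e = d - w @ x                        # a priori error for the current sample
    w = w + mu * e * x / (eps + x @ x)   # normalization makes the step size scale-invariant
    return w, e

# Toy usage: identify an unknown 4-tap filter from noisy samples.
rng = np.random.default_rng(0)
w_true = rng.standard_normal(4)
w = np.zeros(4)
for _ in range(2000):
    x = rng.standard_normal(4)
    d = w_true @ x + 0.01 * rng.standard_normal()
    w, _ = nlms_step(w, x, d)
print(np.allclose(w, w_true, atol=0.05))  # True: the filter has converged
```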
