Forward gradient descent (FGD) has been proposed as a biologically more plausible alternative to gradient descent, as it can be computed without a backward pass. Considering the linear model with $d$ parameters, previous work has found that the prediction error of FGD is, however, slower by a factor $d$ than the prediction error of stochastic gradient descent (SGD). In this paper we show that by computing $\ell$ FGD steps based on each training sample, this suboptimality factor becomes $d/\ell$, and thus the suboptimality of the rate disappears if $\ell \gtrsim d$. We also show that FGD with repeated sampling can adapt to low-dimensional structure in the input distribution. The main mathematical challenge lies in controlling the dependencies arising from the repeated sampling process.
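To make the repeated-sampling scheme concrete, the following is a minimal JAX sketch of forward gradient descent for linear regression, not the paper's code: the step size, the number of repetitions per sample, and the data handling are illustrative assumptions.

```python
# Minimal sketch (under assumed hyperparameters) of forward gradient descent
# (FGD) with repeated sampling for a linear model with squared-error loss.
import jax
import jax.numpy as jnp

def loss(theta, x, y):
    # Squared-error loss of the linear model on a single sample (x, y).
    return 0.5 * (jnp.dot(x, theta) - y) ** 2

def fgd_repeated(theta, data, num_repeats, lr, key):
    """Take `num_repeats` forward-gradient steps on each training sample."""
    for x, y in data:
        for _ in range(num_repeats):
            key, subkey = jax.random.split(key)
            v = jax.random.normal(subkey, theta.shape)  # random direction
            # Forward-mode JVP returns the directional derivative <grad, v>
            # without a backward pass; scaling v by it gives an unbiased
            # estimate of the gradient.
            _, dir_deriv = jax.jvp(lambda t: loss(t, x, y), (theta,), (v,))
            theta = theta - lr * dir_deriv * v
    return theta
```

Taking `num_repeats = 1` recovers the single-step FGD analyzed in previous work; increasing it toward the order of the parameter dimension is the regime the abstract refers to.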