Proximal Backpropagation

14 June 2017
Thomas Frerix, Thomas Möllenhoff, Michael Möller, Daniel Cremers
arXiv:1706.04638
Abstract

We offer a generalized point of view on the backpropagation algorithm, currently the most common technique to train neural networks via stochastic gradient descent and variants thereof. Specifically, we show that backpropagation of a prediction error is equivalent to sequential gradient descent steps on a quadratic penalty energy. This energy comprises the network activations as variables of the optimization and couples them to the network parameters. Based on this viewpoint, we illustrate the limitations on step sizes when optimizing a nested function with gradient descent. Rather than taking explicit gradient steps, where step size restrictions are an impediment for optimization, we propose proximal backpropagation (ProxProp) as a novel algorithm that takes implicit gradient steps to update the network parameters. We experimentally demonstrate that our algorithm is robust in the sense that it decreases the objective function for a wide range of parameter values. In a systematic quantitative analysis, we compare to related approaches on a supervised visual learning task (CIFAR-10) for fully connected as well as convolutional neural networks and for an unsupervised autoencoder (USPS dataset). We demonstrate that ProxProp leads to a significant speed-up in training performance.
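
To make the contrast between explicit and implicit (proximal) parameter updates concrete, here is a minimal NumPy sketch for a single linear layer with a quadratic penalty. It is not the authors' implementation: the toy data, the step size tau, and the helper function names are illustrative assumptions. The explicit update is the usual gradient step on the penalty, while the proximal update solves a small linear system and remains stable for step sizes at which the explicit step diverges.

```python
# Explicit vs. implicit (proximal) update for one linear layer with a quadratic
# penalty. Hypothetical sketch, not the paper's code: data, tau, and function
# names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, m = 20, 5, 64                      # layer width, outputs, batch size
A = rng.normal(size=(n_in, m))                  # input activations of the layer
W_true = rng.normal(size=(n_out, n_in))
Z = W_true @ A                                  # target pre-activations

def explicit_step(W, tau):
    """Ordinary (backprop-style) gradient step on 0.5/m * ||Z - W A||^2."""
    grad = (W @ A - Z) @ A.T / m
    return W - tau * grad

def proximal_step(W, tau):
    """Implicit step: argmin_V 0.5/m * ||Z - V A||^2 + 0.5/tau * ||V - W||^2.
    Setting the gradient to zero gives the linear system
        V (A A^T / m + I / tau) = Z A^T / m + W / tau."""
    lhs = A @ A.T / m + np.eye(n_in) / tau
    rhs = Z @ A.T / m + W / tau
    return np.linalg.solve(lhs.T, rhs.T).T      # V = rhs @ inv(lhs); lhs is symmetric

tau = 2.0                                       # deliberately large step size
W_exp = np.zeros((n_out, n_in))
W_prox = np.zeros((n_out, n_in))
for _ in range(20):
    W_exp = explicit_step(W_exp, tau)           # explicit step blows up for this tau
    W_prox = proximal_step(W_prox, tau)         # proximal step stays stable

print("explicit residual:", np.linalg.norm(W_exp @ A - Z))   # grows without bound
print("proximal residual:", np.linalg.norm(W_prox @ A - Z))  # shrinks toward zero
```

In the full algorithm these implicit parameter updates are embedded in the layer-wise penalty formulation described in the abstract; the sketch only isolates why the implicit step tolerates step sizes at which the explicit step fails.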
