ResearchTrend.AI
Convergence and Alignment of Gradient Descent with Random Backpropagation Weights

10 June 2021 · arXiv:2106.06044
Ganlin Song, Ruitu Xu, John D. Lafferty
ODL
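The paper concerns feedback alignment: training a network by gradient descent while replacing the transposed forward weights in the backward pass with a fixed random feedback matrix. A minimal NumPy sketch of the idea on a toy one-hidden-layer regression problem (all sizes, the learning rate, and the linear target are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one hidden layer, linear regression target (illustrative sizes).
n_in, n_hid, n_out, n = 5, 32, 1, 64
X = rng.normal(size=(n, n_in))
y = X @ rng.normal(size=(n_in, n_out))

W1 = 0.1 * rng.normal(size=(n_in, n_hid))   # forward weights, layer 1
W2 = 0.1 * rng.normal(size=(n_hid, n_out))  # forward weights, layer 2
B = 0.1 * rng.normal(size=(n_out, n_hid))   # fixed random feedback weights

loss_init = np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)

lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1)                 # forward pass
    e = h @ W2 - y                      # output error
    # Backprop would propagate e through W2.T; feedback alignment
    # uses the fixed random matrix B instead.
    delta_h = (e @ B) * (1.0 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    W2 -= lr * h.T @ e / n
    W1 -= lr * X.T @ delta_h / n

loss = np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)

# "Alignment" refers to the phenomenon that, during training, W2 tends to
# rotate toward B.T, so the random feedback direction becomes an
# increasingly good proxy for the true gradient.
cos_align = (W2.ravel() @ B.T.ravel()) / (np.linalg.norm(W2) * np.linalg.norm(B))
```

On toy problems like this, the training loss typically decreases even though the backward weights are random and never updated, which is the empirical observation the paper analyzes theoretically.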

Papers citing "Convergence and Alignment of Gradient Descent with Random Backpropagation Weights"

4 / 4 papers shown

| Title | Authors | Topics | Date |
|---|---|---|---|
| Biologically-Motivated Learning Model for Instructed Visual Processing | R. Abel, S. Ullman | | 04 Jun 2023 |
| Exponential Bellman Equation and Improved Regret Bounds for Risk-Sensitive Reinforcement Learning | Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang | | 06 Nov 2021 |
| How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective | Akhilan Boopathy, Ila Fiete | | 15 Jun 2021 |
| Multiple Descent: Design Your Own Generalization Curve | Lin Chen, Yifei Min, M. Belkin, Amin Karbasi | DRL | 03 Aug 2020 |