How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
Akhilan Boopathy, Ila Fiete
arXiv:2106.08453 (15 June 2021)
Papers citing "How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective" (7 papers shown):

1. "Training neural networks without backpropagation using particles" by Deepak Kumar (07 Dec 2024)
2. "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark" by Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, ..., Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu, Tianlong Chen (18 Feb 2024)
3. "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen, Yimeng Zhang, Jinghan Jia, James Diffenderfer, Jiancheng Liu, Konstantinos Parasyris, Yihua Zhang, Zheng-Wei Zhang, B. Kailkhura, Sijia Liu (03 Oct 2023)
4. "Beyond Geometry: Comparing the Temporal Structure of Computation in Neural Circuits with Dynamical Similarity Analysis" by Mitchell Ostrow, Adam J. Eisen, L. Kozachkov, Ila Fiete (16 Jun 2023)
5. "Implicit Regularization in Feedback Alignment Learning Mechanisms for Neural Networks" by Zachary Robertson, Oluwasanmi Koyejo (02 Jun 2023)
6. "Polarity is all you need to learn and transfer faster" by Qingyang Wang, Michael A. Powell, Ali Geisa, Eric W. Bridgeford, Joshua T. Vogelstein (29 Mar 2023)
7. "The Influence of Learning Rule on Representation Dynamics in Wide Neural Networks" by Blake Bordelon, C. Pehlevan (05 Oct 2022)