Feedback alignment in deep convolutional networks

Theodore H. Moskovitz, Ashok Litwin-Kumar, L. F. Abbott
12 December 2018 · arXiv:1812.06488
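For context, feedback alignment is a biologically motivated alternative to backpropagation: the error is propagated to hidden layers through a fixed random feedback matrix rather than the transpose of the forward weights. The following is a minimal illustrative sketch of that idea on a toy two-layer network (all sizes, the learning rate, and the toy regression task are our own assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network on a synthetic regression task (sizes are arbitrary).
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
# Fixed random feedback matrix: plays the role W2.T would play in backprop.
B2 = rng.normal(0.0, 0.5, (n_out, n_hid))

X = rng.normal(size=(64, n_in))
T = X @ rng.normal(size=(n_in, n_out))  # linear teacher targets

loss0 = np.mean((np.tanh(X @ W1.T) @ W2.T - T) ** 2)  # loss before training

lr = 0.05
for _ in range(500):
    h = np.tanh(X @ W1.T)          # forward pass, hidden layer
    y = h @ W2.T                   # forward pass, output layer
    e = y - T                      # output error
    # Feedback alignment: route the error through the fixed random B2,
    # not through W2.T as exact backpropagation would.
    dh = (e @ B2) * (1.0 - h**2)   # tanh' = 1 - tanh^2
    W2 -= lr * (e.T @ h) / len(X)
    W1 -= lr * (dh.T @ X) / len(X)

loss = np.mean((np.tanh(X @ W1.T) @ W2.T - T) ** 2)  # loss after training
```

During training the forward weights tend to align with the fixed feedback weights, so the random feedback delivers increasingly useful gradient information; on this toy task the loss drops well below its initial value.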

Papers citing "Feedback alignment in deep convolutional networks" (16 papers shown)
  1. Learning with augmented target information: An alternative theory of Feedback Alignment
     Huzi Cheng, Joshua W. Brown. 03 Apr 2023. [CVBM]
  2. Dual Propagation: Accelerating Contrastive Hebbian Learning with Dyadic Neurons
     R. Høier, D. Staudt, Christopher Zach. 02 Feb 2023.
  3. Towards Scaling Difference Target Propagation by Learning Backprop Targets
     M. Ernoult, Fabrice Normandin, A. Moudgil, Sean Spinney, Eugene Belilovsky, Irina Rish, Blake A. Richards, Yoshua Bengio. 31 Jan 2022.
  4. Benchmarking the Accuracy and Robustness of Feedback Alignment Algorithms
     Albert Jiménez Sanfiz, Mohamed Akrout. 30 Aug 2021. [OOD, AAML]
  5. Towards Biologically Plausible Convolutional Networks
     Roman Pogodin, Yash Mehta, Timothy Lillicrap, P. Latham. 22 Jun 2021.
  6. Credit Assignment in Neural Networks through Deep Feedback Control
     Alexander Meulemans, Matilde Tristany Farinha, Javier García Ordónez, Pau Vilimelis Aceituno, João Sacramento, Benjamin Grewe. 15 Jun 2021.
  7. Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods
     Shiyu Duan, José C. Príncipe. 09 Jan 2021. [MQ]
  8. Align, then memorise: the dynamics of learning with feedback alignment
     Maria Refinetti, Stéphane d'Ascoli, Ruben Ohana, Sebastian Goldt. 24 Nov 2020.
  9. Biological credit assignment through dynamic inversion of feedforward networks
     William F. Podlaski, C. Machens. 10 Jul 2020.
  10. Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
      Julien Launay, Iacopo Poli, François Boniface, Florent Krzakala. 23 Jun 2020.
  11. Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks
      Roman Pogodin, P. Latham. 12 Jun 2020.
  12. Two Routes to Scalable Credit Assignment without Weight Symmetry
      D. Kunin, Aran Nayebi, Javier Sagastuy-Breña, Surya Ganguli, Jonathan M. Bloom, Daniel L. K. Yamins. 28 Feb 2020.
  13. Spike-based causal inference for weight alignment
      Jordan Guerguiev, Konrad Paul Kording, Blake A. Richards. 03 Oct 2019. [CML]
  14. Deep Learning without Weight Transport
      Mohamed Akrout, Collin Wilson, Peter C. Humphreys, Timothy Lillicrap, D. Tweed. 10 Apr 2019. [CVBM]
  15. Training Neural Networks with Local Error Signals
      Arild Nøkland, L. Eidnes. 20 Jan 2019.
  16. Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
      Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington. 14 Jun 2018.