Never look back - A modified EnKF method and its application to the training of neural networks without back propagation
E. Haber, F. Lucka, Lars Ruthotto
arXiv:1805.08034 · 21 May 2018

Papers citing "Never look back - A modified EnKF method and its application to the training of neural networks without back propagation"

7 / 7 papers shown
Training neural networks without backpropagation using particles
Deepak Kumar · 07 Dec 2024

Gradient-free training of neural ODEs for system identification and control using ensemble Kalman inversion
Lucas Böttcher · 15 Jul 2023

Second Order Ensemble Langevin Method for Sampling and Inverse Problems
Ziming Liu, Andrew M. Stuart, Yixuan Wang · 09 Aug 2022

Stable Anderson Acceleration for Deep Learning
Massimiliano Lupo Pasini, Junqi Yin, Viktor Reshniak, M. Stoyanov · 26 Oct 2021

Mean-Field and Kinetic Descriptions of Neural Differential Equations
Michael Herty, T. Trimborn, G. Visconti · 07 Jan 2020

Ensemble Kalman Inversion: A Derivative-Free Technique For Machine Learning Tasks
Nikola B. Kovachki, Andrew M. Stuart · 10 Aug 2018

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · 15 Sep 2016