Continuous Time Analysis of Momentum Methods (arXiv:1906.04285)
10 June 2019
Nikola B. Kovachki, Andrew M. Stuart

Papers citing "Continuous Time Analysis of Momentum Methods"

12 papers shown
Nesterov Acceleration for Ensemble Kalman Inversion and Variants
Sydney Vernon, Eviatar Bach, Oliver R. A. Dunbar
15 Jan 2025

Provable Accuracy Bounds for Hybrid Dynamical Optimization and Sampling
Matthew Burns, Qingyuan Hou, Michael Huang
08 Oct 2024

Who breaks early, looses: goal oriented training of deep neural networks based on port Hamiltonian dynamics
Julian Burghoff, Marc Heinrich Monells, Hanno Gottschalk
14 Apr 2023

From Optimization to Sampling Through Gradient Flows
Nicolas García Trillos, B. Hosseini, D. Sanz-Alonso
22 Feb 2023

On a continuous time model of gradient descent dynamics and instability in deep learning
Mihaela Rosca, Yan Wu, Chongli Qin, Benoit Dherin
03 Feb 2023

Implicit regularization in Heavy-ball momentum accelerated stochastic gradient descent
Avrajit Ghosh, He Lyu, Xitong Zhang, Rongrong Wang
02 Feb 2023

Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points
Mayank Baranwal, Param Budhraja, V. Raj, A. Hota
07 Dec 2022

Toward Equation of Motion for Deep Neural Networks: Continuous-time Gradient Descent and Discretization Error Analysis
Taiki Miyagawa
28 Oct 2022

From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent
Satyen Kale, Jason D. Lee, Chris De Sa, Ayush Sekhari, Karthik Sridharan
13 Oct 2022

A Continuous-time Stochastic Gradient Descent Method for Continuous Data
Kexin Jin, J. Latz, Chenguang Liu, Carola-Bibiane Schönlieb
07 Dec 2021

Heavy Ball Neural Ordinary Differential Equations
Hedi Xia, Vai Suliafu, H. Ji, T. Nguyen, Andrea L. Bertozzi, Stanley J. Osher, Bao Wang
10 Oct 2021

A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
Weijie Su, Stephen P. Boyd, Emmanuel J. Candes
04 Mar 2015