
Continuous-time Models for Stochastic Optimization Algorithms
Antonio Orvieto, Aurelien Lucchi
arXiv:1810.02565, 5 October 2018

Papers citing "Continuous-time Models for Stochastic Optimization Algorithms" (9 papers)
1. Provable Accuracy Bounds for Hybrid Dynamical Optimization and Sampling
   Matthew Burns, Qingyuan Hou, Michael Huang (08 Oct 2024)

2. An SDE for Modeling SAM: Theory and Insights
   Enea Monzio Compagnoni, Luca Biggio, Antonio Orvieto, F. Proske, Hans Kersting, Aurelien Lucchi (19 Jan 2023)

3. Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points
   Mayank Baranwal, Param Budhraja, V. Raj, A. Hota (07 Dec 2022)

4. From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent
   Satyen Kale, Jason D. Lee, Chris De Sa, Ayush Sekhari, Karthik Sridharan (13 Oct 2022)

5. Understanding A Class of Decentralized and Federated Optimization Algorithms: A Multi-Rate Feedback Control Perspective
   Xinwei Zhang, Mingyi Hong, N. Elia (27 Apr 2022) [FedML]

6. Free-rider Attacks on Model Aggregation in Federated Learning
   Yann Fraboni, Richard Vidal, Marco Lorenzi (21 Jun 2020) [FedML]

7. Shadowing Properties of Optimization Algorithms
   Antonio Orvieto, Aurelien Lucchi (12 Nov 2019)

8. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
   Hamed Karimi, J. Nutini, Mark W. Schmidt (16 Aug 2016)

9. A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
   Weijie Su, Stephen P. Boyd, Emmanuel J. Candes (04 Mar 2015)