A Variational Perspective on Accelerated Methods in Optimization
Andre Wibisono, Ashia Wilson, Michael I. Jordan · 14 March 2016 · arXiv:1603.04245

Papers citing "A Variational Perspective on Accelerated Methods in Optimization" (50 of 53 shown)
Hellinger-Kantorovich Gradient Flows: Global Exponential Decay of Entropy Functionals
Alexander Mielke, Jia Jie Zhu · 28 Jan 2025

Distributed Event-Based Learning via ADMM
Güner Dilsad Er, Sebastian Trimpe, Michael Muehlebach · 17 May 2024 · FedML

A General Continuous-Time Formulation of Stochastic ADMM and Its Variants
Chris Junchi Li · 22 Apr 2024

Leveraging Continuous Time to Understand Momentum When Training Diagonal Linear Networks
Hristo Papazov, Scott Pesme, Nicolas Flammarion · 08 Mar 2024

GAD-PVI: A General Accelerated Dynamic-Weight Particle-Based Variational Inference Framework
Fangyikang Wang, Huminhao Zhu, Chao Zhang, Han Zhao, Hui Qian · 27 Dec 2023

Quantum Langevin Dynamics for Optimization
Zherui Chen, Yuchen Lu, Hao Wang, Yizhou Liu, Tongyang Li · 27 Nov 2023 · AI4CE

Accelerating optimization over the space of probability measures
Shi Chen, Wenxuan Wu, Yuhang Yao, Stephen J. Wright · 06 Oct 2023

Towards Predicting Equilibrium Distributions for Molecular Systems with Deep Learning
Shuxin Zheng, Jiyan He, Chang-Shu Liu, Yu Shi, Ziheng Lu, ..., Peiran Jin, Chi Chen, Frank Noé, Haiguang Liu, Tie-Yan Liu · 08 Jun 2023 · AI4CE

On Underdamped Nesterov's Acceleration
Shu Chen, Bin Shi, Ya-xiang Yuan · 28 Apr 2023

Beyond first-order methods for non-convex non-concave min-max optimization
Abhijeet Vyas, Brian Bullins · 17 Apr 2023

Sublinear Convergence Rates of Extragradient-Type Methods: A Survey on Classical and Recent Developments
Quoc Tran-Dinh · 30 Mar 2023

Quantum Hamiltonian Descent
Jiaqi Leng, Ethan Hickman, Joseph Li, Xiaodi Wu · 02 Mar 2023

Accelerated First-Order Optimization under Nonlinear Constraints
Michael Muehlebach, Michael I. Jordan · 01 Feb 2023

Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points
Mayank Baranwal, Param Budhraja, V. Raj, A. Hota · 07 Dec 2022

Towards Understanding GD with Hard and Conjugate Pseudo-labels for Test-Time Adaptation
Jun-Kun Wang, Andre Wibisono · 18 Oct 2022

On Quantum Speedups for Nonconvex Optimization via Quantum Tunneling Walks
Yizhou Liu, Weijie J. Su, Tongyang Li · 29 Sep 2022

Gradient Norm Minimization of Nesterov Acceleration: o(1/k^3)
Shu Chen, Bin Shi, Ya-xiang Yuan · 19 Sep 2022

Conformal Mirror Descent with Logarithmic Divergences
Amanjit Kainth, Ting-Kam Leonard Wong, Frank Rudzicz · 07 Sep 2022

Multilevel Geometric Optimization for Regularised Constrained Linear Inverse Problems
Sebastian Müller, Stefania Petra, Matthias Zisler · 11 Jul 2022 · AI4CE

Alternating Mirror Descent for Constrained Min-Max Games
Andre Wibisono, Molei Tao, Georgios Piliouras · 08 Jun 2022

Perseus: A Simple and Optimal High-Order Method for Variational Inequalities
Tianyi Lin, Michael I. Jordan · 06 May 2022

Geometric Methods for Sampling, Optimisation, Inference and Adaptive Agents
Alessandro Barp, Lancelot Da Costa, G. França, Karl J. Friston, Mark Girolami, Michael I. Jordan, G. Pavliotis · 20 Mar 2022

A More Stable Accelerated Gradient Method Inspired by Continuous-Time Perspective
Yasong Feng, Weiguo Gao · 09 Dec 2021

Breaking the Convergence Barrier: Optimization via Fixed-Time Convergent Flows
Param Budhraja, Mayank Baranwal, Kunal Garg, A. Hota · 02 Dec 2021

No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization
Jun-Kun Wang, Jacob D. Abernethy, Kfir Y. Levy · 22 Nov 2021

A Deterministic Sampling Method via Maximum Mean Discrepancy Flow with Adaptive Kernel
Yindong Chen, Yiwei Wang, Lulu Kang, Chun Liu · 21 Nov 2021

Convergence and Stability of the Stochastic Proximal Point Algorithm with Momentum
J. Kim, Panos Toulis, Anastasios Kyrillidis · 11 Nov 2021

On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems
Michael Muehlebach, Michael I. Jordan · 17 Jul 2021

Revisiting the Role of Euler Numerical Integration on Acceleration and Stability in Convex Optimization
Peiyuan Zhang, Antonio Orvieto, Hadi Daneshmand, Thomas Hofmann, Roy S. Smith · 23 Feb 2021

First-Order Methods for Convex Optimization
Pavel Dvurechensky, Mathias Staudigl, Shimrit Shtern · 04 Jan 2021 · ODL

On dissipative symplectic integration with applications to gradient-based optimization
G. França, Michael I. Jordan, René Vidal · 15 Apr 2020

Optimal anytime regret with two experts
Nicholas J. A. Harvey, Christopher Liaw, E. Perkins, Sikander Randhawa · 20 Feb 2020

Augmented Normalizing Flows: Bridging the Gap Between Generative Flows and Latent Variable Models
Chin-Wei Huang, Laurent Dinh, Aaron Courville · 17 Feb 2020 · DRL

From Nesterov's Estimate Sequence to Riemannian Acceleration
Kwangjun Ahn, S. Sra · 24 Jan 2020

Demon: Improved Neural Network Training with Momentum Decay
John Chen, Cameron R. Wolfe, Zhaoqi Li, Anastasios Kyrillidis · 11 Oct 2019 · ODL

Conjugate Gradients and Accelerated Methods Unified: The Approximate Duality Gap View
Jelena Diakonikolas, L. Orecchia · 29 Jun 2019

Continuous Time Analysis of Momentum Methods
Nikola B. Kovachki, Andrew M. Stuart · 10 Jun 2019

Theoretical guarantees for sampling and inference in generative models with latent diffusions
Belinda Tzen, Maxim Raginsky · 05 Mar 2019 · DiffM

Accelerated Flow for Probability Distributions
Amirhossein Taghvaei, P. Mehta · 10 Jan 2019

Understanding the Acceleration Phenomenon via High-Resolution Differential Equations
Bin Shi, S. Du, Michael I. Jordan, Weijie J. Su · 21 Oct 2018

Online Adaptive Methods, Universality and Acceleration
Kfir Y. Levy, A. Yurtsever, V. Cevher · 08 Sep 2018 · ODL

Towards Riemannian Accelerated Gradient Methods
Hongyi Zhang, S. Sra · 07 Jun 2018

Direct Runge-Kutta Discretization Achieves Acceleration
J.N. Zhang, Aryan Mokhtari, S. Sra, Ali Jadbabaie · 01 May 2018

Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem
Andre Wibisono · 22 Feb 2018

On Symplectic Optimization
M. Betancourt, Michael I. Jordan, Ashia Wilson · 10 Feb 2018

Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent
Chi Jin, Praneeth Netrapalli, Michael I. Jordan · 28 Nov 2017 · ODL

Underdamped Langevin MCMC: A non-asymptotic analysis
Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett, Michael I. Jordan · 12 Jul 2017

Stochastic Methods for Composite and Weakly Convex Optimization Problems
John C. Duchi, Feng Ruan · 24 Mar 2017

Stochastic Composite Least-Squares Regression with convergence rate O(1/n)
Nicolas Flammarion, Francis R. Bach · 21 Feb 2017

Geometric descent method for convex composite minimization
Shixiang Chen, Shiqian Ma, Wei Liu · 29 Dec 2016