Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Stochastic Optimization: Non-Asymptotic Performance Bounds and Momentum-Based Acceleration

12 September 2018
Xuefeng Gao, Mert Gurbuzbalaban, Lingjiong Zhu
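
For context on the method the works below build on, here is a minimal sketch of a stochastic gradient Hamiltonian Monte Carlo (SGHMC) update written in the generic underdamped-Langevin form. It is not the cited paper's exact algorithm or tuning; the function names, step size, and friction coefficient are illustrative only.

import numpy as np


def sghmc_sample(grad_log_post, theta0, n_iters=20000, step=1e-2, friction=1.0, rng=None):
    """Toy SGHMC-style sampler (generic form, not the cited paper's exact scheme).

    grad_log_post(theta) returns a possibly noisy estimate of the gradient of
    the log target density at theta.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    v = np.zeros_like(theta)
    samples = []
    for _ in range(n_iters):
        # Momentum update: noisy gradient drift, friction, and injected Gaussian noise
        # whose variance 2 * friction * step balances the friction term.
        noise = rng.normal(size=theta.shape) * np.sqrt(2.0 * friction * step)
        v = v + step * grad_log_post(theta) - step * friction * v + noise
        # Position update driven by the momentum.
        theta = theta + step * v
        samples.append(theta.copy())
    return np.array(samples)


if __name__ == "__main__":
    # Sample a 1-D standard normal using a deliberately noisy gradient estimate.
    rng = np.random.default_rng(1)
    noisy_grad = lambda th: -th + 0.1 * rng.normal(size=th.shape)
    draws = sghmc_sample(noisy_grad, theta0=np.array([2.0]))
    print(draws[5000:].mean(), draws[5000:].std())  # should be close to 0 and 1

After discarding the early iterations as burn-in, the draws should be approximately standard normal; the momentum variable is what distinguishes this update from plain stochastic gradient Langevin dynamics.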

Papers citing "Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Stochastic Optimization: Non-Asymptotic Performance Bounds and Momentum-Based Acceleration"

17 citing papers shown.

1. A General Continuous-Time Formulation of Stochastic ADMM and Its Variants. Chris Junchi Li. 22 Apr 2024.
2. Privacy of SGD under Gaussian or Heavy-Tailed Noise: Guarantees without Gradient Clipping. Umut Simsekli, Mert Gurbuzbalaban, S. Yıldırım, Lingjiong Zhu. 04 Mar 2024.
3. Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent. Lingjiong Zhu, Mert Gurbuzbalaban, Anant Raj, Umut Simsekli. 20 May 2023.
4. Cyclic and Randomized Stepsizes Invoke Heavier Tails in SGD than Constant Stepsize. Mert Gurbuzbalaban, Yuanhan Hu, Umut Simsekli, Lingjiong Zhu. 10 Feb 2023.
5. Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions. Anant Raj, Lingjiong Zhu, Mert Gurbuzbalaban, Umut Simsekli. 27 Jan 2023.
6. Kinetic Langevin MCMC Sampling Without Gradient Lipschitz Continuity -- the Strongly Convex Case. Tim Johnston, Iosif Lytras, Sotirios Sabanis. 19 Jan 2023.
7. Global convergence of optimized adaptive importance samplers. Ömer Deniz Akyildiz. 02 Jan 2022.
8. Decentralized Bayesian Learning with Metropolis-Adjusted Hamiltonian Monte Carlo. Vyacheslav Kungurtsev, Adam D. Cobb, T. Javidi, Brian Jalaian. 15 Jul 2021.
9. A Unifying and Canonical Description of Measure-Preserving Diffusions. Alessandro Barp, So Takao, M. Betancourt, Alexis Arnaudon, Mark Girolami. 06 May 2021.
10. The shifted ODE method for underdamped Langevin MCMC. James Foster, Terry Lyons, Harald Oberhauser. 10 Jan 2021.
11. Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks. Umut Simsekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu. 16 Jun 2020.
12. Nonasymptotic analysis of Stochastic Gradient Hamiltonian Monte Carlo under local conditions for nonconvex optimization. Ömer Deniz Akyildiz, Sotirios Sabanis. 13 Feb 2020.
13. Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Learning. Huy N. Chau, M. Rásonyi. 25 Mar 2019.
14. Understanding the Acceleration Phenomenon via High-Resolution Differential Equations. Bin Shi, S. Du, Michael I. Jordan, Weijie J. Su. 21 Oct 2018.
15. On the Convergence of Stochastic Gradient MCMC Algorithms with High-Order Integrators. Changyou Chen, Nan Ding, Lawrence Carin. 21 Oct 2016.
16. A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights. Weijie Su, Stephen P. Boyd, Emmanuel J. Candes. 04 Mar 2015.
17. MCMC using Hamiltonian dynamics. Radford M. Neal. 09 Jun 2012.