A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics
Yuchen Zhang, Percy Liang, Moses Charikar
arXiv:1702.05575 · 18 February 2017

Papers citing "A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics"

23 of 23 papers shown
Learning Noisy Halfspaces with a Margin: Massart is No Harder than Random
Gautam Chandrasekaran, Vasilis Kontonis, Konstantinos Stavropoulos, Kevin Tian · 20 Jan 2025

Provable Accuracy Bounds for Hybrid Dynamical Optimization and Sampling
Matthew Burns, Qingyuan Hou, Michael Huang · 08 Oct 2024

Quantum Langevin Dynamics for Optimization
Zherui Chen, Yuchen Lu, Hao Wang, Yizhou Liu, Tongyang Li · 27 Nov 2023

An Adaptive Empirical Bayesian Method for Sparse Deep Learning
Wei Deng, Xiao Zhang, F. Liang, Guang Lin · 23 Oct 2019

Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis
Maxim Raginsky, Alexander Rakhlin, Matus Telgarsky · 13 Feb 2017

A Comprehensive Study of Deep Bidirectional LSTM RNNs for Acoustic Modeling in Speech Recognition
Albert Zeyer, P. Doetsch, P. Voigtlaender, Ralf Schlüter, Hermann Ney · 22 Jun 2016

Efficient approaches for escaping higher order saddle points in non-convex optimization
Anima Anandkumar, Rong Ge · 18 Feb 2016

Gradient Descent Converges to Minimizers
Jason D. Lee, Max Simchowitz, Michael I. Jordan, Benjamin Recht · 16 Feb 2016

Guarantees in Wasserstein Distance for the Langevin Monte Carlo Algorithm
Thomas Bonis · 08 Feb 2016

Neural GPUs Learn Algorithms
Lukasz Kaiser, Ilya Sutskever · 25 Nov 2015

Adding Gradient Noise Improves Learning for Very Deep Networks
Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens · 21 Nov 2015

Neural Random-Access Machines
Karol Kurach, Marcin Andrychowicz, Ilya Sutskever · 19 Nov 2015

Neural Programmer: Inducing Latent Programs with Gradient Descent
Arvind Neelakantan, Quoc V. Le, Ilya Sutskever · 16 Nov 2015

On the Quality of the Initial Basin in Overspecified Neural Networks
Itay Safran, Ohad Shamir · 13 Nov 2015

Efficient Learning of Linear Separators under Bounded Noise
Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, Ruth Urner · 12 Mar 2015

Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition
Rong Ge, Furong Huang, Chi Jin, Yang Yuan · 06 Mar 2015

Theoretical guarantees for approximate sampling from smooth and log-concave densities
A. Dalalyan · 23 Dec 2014

Consistency and fluctuations for stochastic gradient Langevin dynamics
Yee Whye Teh, Alexandre Hoang Thiery, Sebastian J. Vollmer · 01 Sep 2014

Optimal rates for zero-order convex optimization: the power of two function evaluations
John C. Duchi, Michael I. Jordan, Martin J. Wainwright, Andre Wibisono · 07 Dec 2013

Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
Zhaoran Wang, Han Liu, Tong Zhang · 20 Jun 2013

Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
Po-Ling Loh, Martin J. Wainwright · 10 May 2013

Advances in Optimizing Recurrent Networks
Yoshua Bengio, Nicolas Boulanger-Lewandowski, Razvan Pascanu · 04 Dec 2012

On the difficulty of training Recurrent Neural Networks
Razvan Pascanu, Tomas Mikolov, Yoshua Bengio · 21 Nov 2012