Langevin Dynamics with Continuous Tempering for Training Deep Neural Networks

13 March 2017
Nanyang Ye, Zhanxing Zhu, Rafał K. Mantiuk

Papers citing "Langevin Dynamics with Continuous Tempering for Training Deep Neural Networks"

9 papers shown

Application of Langevin Dynamics to Advance the Quantum Natural Gradient Optimization Algorithm
Oleksandr Borysenko, Mykhailo Bratchenko, Ilya Lukin, Mykola Luhanko, Ihor Omelchenko, Andrii Sotnikov, Alessandro Lomi
17 Feb 2025

Provable Convergence and Limitations of Geometric Tempering for Langevin Dynamics
Omar Chehab, Anna Korba, Austin Stromme, Adrien Vacher
13 Oct 2024

Calibrating AI Models for Wireless Communications via Conformal Prediction
K. Cohen, Sangwoo Park, Osvaldo Simeone, S. Shamai
15 Dec 2022

A Survey of Uncertainty in Deep Neural Networks
J. Gawlikowski, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseo Lee, Matthias Humt, ..., R. Roscher, Muhammad Shahzad, Wen Yang, R. Bamler, Xiaoxiang Zhu
Communities: BDL, UQCV, OOD
07 Jul 2021

Accelerating Convergence of Replica Exchange Stochastic Gradient MCMC via Variance Reduction
Wei Deng, Qi Feng, G. Karagiannis, Guang Lin, F. Liang
02 Oct 2020

Predicting the outputs of finite deep neural networks trained with noisy gradients
Gadi Naveh, Oded Ben-David, H. Sompolinsky, Z. Ringel
02 Apr 2020

On the Convergence of Stochastic Gradient MCMC Algorithms with High-Order Integrators
Changyou Chen, Nan Ding, Lawrence Carin
21 Oct 2016

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
Communities: ODL
15 Sep 2016

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
Communities: ODL
30 Nov 2014