Stochastic gradient descent for hybrid quantum-classical optimization (arXiv:1910.01155)
2 October 2019
R. Sweke, Frederik Wilde, Johannes Jakob Meyer, Maria Schuld, Paul K. Fährmann, Barthélémy Meynard-Piganeau, Jens Eisert
Papers citing "Stochastic gradient descent for hybrid quantum-classical optimization" (17 papers):
Escaping from the Barren Plateau via Gaussian Initializations in Deep Variational Quantum Circuits. Kaining Zhang, Liu Liu, Min-hsiu Hsieh, Dacheng Tao. 20 Feb 2025.
Diffusion-Inspired Quantum Noise Mitigation in Parameterized Quantum Circuits. Hoang-Quan Nguyen, Xuan-Bac Nguyen, Samuel Yen-Chi Chen, Hugh Churchill, Nicholas Borys, Samee U. Khan, Khoa Luu. 02 Jun 2024.
Training-efficient density quantum machine learning. Brian Coyle, El Amine Cherrat, Nishant Jain, Natansh Mathur, Snehal Raj, Skander Kazdaghli, Iordanis Kerenidis. 30 May 2024.
Stochastic noise can be helpful for variational quantum algorithms. Junyu Liu, Frederik Wilde, A. A. Mele, Liang Jiang, Jens Eisert. 13 Oct 2022.
Noise-Resilient Variational Hybrid Quantum-Classical Optimization. Laura Gentini, A. Cuccoli, S. Pirandola, P. Verrucchi, L. Banchi. 13 Dec 2019.
Parameterized quantum circuits as machine learning models. Marcello Benedetti, Erika Lloyd, Stefan H. Sack, Mattia Fiorentini. 18 Jun 2019.
Tight Dimension Independent Lower Bound on the Expected Convergence Rate for Diminishing Step Sizes in SGD. Phuong Ha Nguyen, Lam M. Nguyen, Marten van Dijk. 10 Oct 2018.
Stochastic Gradient Descent with Biased but Consistent Gradient Estimators. Jie Chen, Ronny Luss. 31 Jul 2018.
On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes. Xiaoyun Li, Francesco Orabona. 21 May 2018.
Supervised learning with quantum enhanced feature spaces. Vojtěch Havlíček, A. Córcoles, K. Temme, A. Harrow, A. Kandala, J. Chow, J. Gambetta. 30 Apr 2018.
An Alternative View: When Does SGD Escape Local Minima? Robert D. Kleinberg, Yuanzhi Li, Yang Yuan. 17 Feb 2018.
Don't Decay the Learning Rate, Increase the Batch Size. Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le. 01 Nov 2017.
An overview of gradient descent optimization algorithms. Sebastian Ruder. 15 Sep 2016.
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition. Hamed Karimi, J. Nutini, Mark Schmidt. 16 Aug 2016.
Adam: A Method for Stochastic Optimization. Diederik P. Kingma, Jimmy Ba. 22 Dec 2014.
Scalable Kernel Methods via Doubly Stochastic Gradients. Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina Balcan, Le Song. 21 Jul 2014.
HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. Feng Niu, Benjamin Recht, Christopher Ré, Stephen J. Wright. 28 Jun 2011.