On the Convergence of the Gradient Descent Method with Stochastic Fixed-point Rounding Errors under the Polyak-Lojasiewicz Inequality

23 January 2023
Lu Xia, M. Hochstenbach, Stefano Massei

Papers citing "On the Convergence of the Gradient Descent Method with Stochastic Fixed-point Rounding Errors under the Polyak-Lojasiewicz Inequality"

13 / 13 papers shown

Error Analysis of Sum-Product Algorithms under Stochastic Rounding
P. D. O. Castro, El-Mehdi El Arar, E. Petit, D. Sohier
19 Nov 2024

AdamL: A fast adaptive gradient method incorporating loss function
Lu Xia, Stefano Massei
23 Dec 2023

Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent
Spencer Frei, Quanquan Gu
25 Jun 2021

Fully Onboard AI-powered Human-Drone Pose Estimation on Ultra-low Power Autonomous Flying Nano-UAVs
Daniele Palossi, Nicky Zimmerman, Luca Bompani, Francesco Conti, H. Müller, L. Gambardella, Luca Benini, Alessandro Giusti, Jérôme Guzzi
19 Mar 2021

Loss landscapes and optimization in over-parameterized non-linear systems and neural networks
Chaoyue Liu, Libin Zhu, M. Belkin
29 Feb 2020

Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology
Quynh N. Nguyen, Marco Mondelli
18 Feb 2020

Training Deep Neural Networks with 8-bit Floating Point Numbers
Naigang Wang, Jungwook Choi, D. Brand, Chia-Yu Chen, K. Gopalakrishnan
19 Dec 2018

Low-Precision Floating-Point Schemes for Neural Network Training
Marc Ortiz, A. Cristal, Eduard Ayguadé, Marc Casas
14 Apr 2018

When Does Stochastic Gradient Algorithm Work Well?
Lam M. Nguyen, Nam H. Nguyen, Dzung Phan, Jayant Kalagnanam, K. Scheinberg
18 Jan 2018

Stability and Generalization of Learning Algorithms that Converge to Global Optima
Zachary B. Charles, Dimitris Papailiopoulos
23 Oct 2017

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark Schmidt
16 Aug 2016

Deep Learning with Limited Numerical Precision
Suyog Gupta, A. Agrawal, K. Gopalakrishnan, P. Narayanan
09 Feb 2015

Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
Mark Schmidt, Nicolas Le Roux, Francis R. Bach
12 Sep 2011