Convex and Non-convex Optimization Under Generalized Smoothness (arXiv: 2306.01264)
Haochuan Li, Jian Qian, Yi Tian, Alexander Rakhlin, Ali Jadbabaie
2 June 2023

Cited By
Papers citing "Convex and Non-convex Optimization Under Generalized Smoothness" (19 of 19 papers shown):
- "SAD Neural Networks: Divergent Gradient Flows and Asymptotic Optimality via o-minimal Structures" (Julian Kranz, Davide Gallon, Steffen Dereich, Arnulf Jentzen; 14 May 2025)
- "On the Convergence of Adam-Type Algorithm for Bilevel Optimization under Unbounded Smoothness" (Xiaochuan Gong, Jie Hao, Mingrui Liu; 05 Mar 2025)
- "Gradient-Based Multi-Objective Deep Learning: Algorithms, Theories, Applications, and Beyond" (Weiyu Chen, Xiaoyuan Zhang, Baijiong Lin, Xi Lin, Han Zhao, Qingfu Zhang, James T. Kwok; 19 Jan 2025)
- "Understanding Adam Requires Better Rotation Dependent Assumptions" (Lucas Maes, Tianyue H. Zhang, Alexia Jolicoeur-Martineau, Ioannis Mitliagkas, Damien Scieur, Simon Lacoste-Julien, Charles Guille-Escuret; 25 Oct 2024)
- "Error Feedback under (L_0, L_1)-Smoothness: Normalization and Momentum" (Sarit Khirirat, Abdurakhmon Sadiev, Artem Riabinin, Eduard A. Gorbunov, Peter Richtárik; 22 Oct 2024)
- "Extended convexity and smoothness and their applications in deep learning" (Binchuan Qi, Wei Gong, Li Li; 08 Oct 2024)
- "Recent Advances in Non-convex Smoothness Conditions and Applicability to Deep Linear Neural Networks" (Vivak Patel, Christian Varner; 20 Sep 2024)
- "Empirical Tests of Optimization Assumptions in Deep Learning" (Hoang Tran, Qinzi Zhang, Ashok Cutkosky; 01 Jul 2024)
- "Scalable Optimization in the Modular Norm" (Tim Large, Yang Liu, Minyoung Huh, Hyojin Bahng, Phillip Isola, Jeremy Bernstein; 23 May 2024)
- "Almost sure convergence rates of stochastic gradient methods under gradient domination" (Simon Weissmann, Sara Klein, Waïss Azizian, Leif Döring; 22 May 2024)
- "The Challenges of Optimization For Data Science" (Christian Varner, Vivak Patel; 15 Apr 2024)
- "On the Convergence of Adam under Non-uniform Smoothness: Separability from SGDM and Beyond" (Bohan Wang, Huishuai Zhang, Qi Meng, Ruoyu Sun, Zhi-Ming Ma, Wei Chen; 22 Mar 2024)
- "Directional Smoothness and Gradient Methods: Convergence and Adaptivity" (Aaron Mishkin, Ahmed Khaled, Yuanhao Wang, Aaron Defazio, Robert Mansel Gower; 06 Mar 2024)
- "Stochastic Weakly Convex Optimization Beyond Lipschitz Continuity" (Wenzhi Gao, Qi Deng; 25 Jan 2024)
- "Bilevel Optimization under Unbounded Smoothness: A New Algorithm and Convergence Analysis" (Jie Hao, Xiaochuan Gong, Mingrui Liu; 17 Jan 2024)
- "Parameter-Agnostic Optimization under Relaxed Smoothness" (Florian Hübler, Junchi Yang, Xiang Li, Niao He; 06 Nov 2023)
- "A Novel Gradient Methodology with Economical Objective Function Evaluations for Data Science Applications" (Christian Varner, Vivak Patel; 19 Sep 2023)
- "Convergence of Adam Under Relaxed Assumptions" (Haochuan Li, Alexander Rakhlin, Ali Jadbabaie; 27 Apr 2023)
- "Acceleration Methods" (Alexandre d’Aspremont, Damien Scieur, Adrien B. Taylor; 23 Jan 2021)