ResearchTrend.AI
A simple uniformly optimal method without line search for convex optimization
Tianjiao Li, Guanghui Lan
16 October 2023 · arXiv: 2310.10082

Papers citing "A simple uniformly optimal method without line search for convex optimization"

27 citing papers:

  1. Optimal and parameter-free gradient minimization methods for convex and nonconvex optimization. Guanghui Lan, Yuyuan Ouyang, Zhe Zhang. 18 Oct 2023.
  2. Normalized Gradients for All. Francesco Orabona. 10 Aug 2023.
  3. Adaptive Proximal Gradient Method for Convex Optimization. Yura Malitsky, Konstantin Mishchenko. 04 Aug 2023.
  4. Accelerated stochastic approximation with state-dependent noise. Sasila Ilandarideva, A. Juditsky, Guanghui Lan, Tianjiao Li. 04 Jul 2023.
  5. DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method. Ahmed Khaled, Konstantin Mishchenko, Chi Jin. 25 May 2023.
  6. DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule. Maor Ivgi, Oliver Hinder, Y. Carmon. 08 Feb 2023.
  7. Learning-Rate-Free Learning by D-Adaptation. Aaron Defazio, Konstantin Mishchenko. 18 Jan 2023.
  8. Adaptive proximal algorithms for convex optimization under local Lipschitz continuity of the gradient. P. Latafat, Andreas Themelis, L. Stella, Panagiotis Patrinos. 11 Jan 2023.
  9. Beyond the Golden Ratio for Variational Inequality Algorithms. Ahmet Alacaoglu, A. Böhm, Yura Malitsky. 28 Dec 2022.
  10. Benchopt: Reproducible, efficient and collaborative optimization benchmarks. Thomas Moreau, Mathurin Massias, Alexandre Gramfort, Pierre Ablin, Pierre-Antoine Bannier, Benjamin Charlier, ..., Binh Duc Nguyen, A. Rakotomamonjy, Zaccharie Ramzi, Joseph Salmon, Samuel Vaiter. 27 Jun 2022.
  11. Accelerated first-order methods for convex optimization with locally Lipschitz continuous gradient. Zhaosong Lu, Sanyou Mei. 02 Jun 2022.
  12. Making SGD Parameter-Free. Y. Carmon, Oliver Hinder. 04 May 2022.
  13. Simple and optimal methods for stochastic variational inequalities, II: Markovian noise and policy evaluation in reinforcement learning. Georgios Kotsalis, Guanghui Lan, Tianjiao Li. 15 Nov 2020.
  14. Simple and optimal methods for stochastic variational inequalities, I: operator extrapolation. Georgios Kotsalis, Guanghui Lan, Tianjiao Li. 05 Nov 2020.
  15. Adaptive Gradient Methods for Constrained Convex Optimization and Variational Inequalities. Alina Ene, Huy Le Nguyen, Adrian Vladu. 17 Jul 2020.
  16. Lipschitz and Comparator-Norm Adaptivity in Online Learning. Zakaria Mhammedi, Wouter M. Koolen. 27 Feb 2020.
  17. Online Adaptive Methods, Universality and Acceleration. Kfir Y. Levy, A. Yurtsever, Volkan Cevher. 08 Sep 2018.
  18. On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes. Xiaoyun Li, Francesco Orabona. 21 May 2018.
  19. Black-Box Reductions for Parameter-free Online Learning in Banach Spaces. Ashok Cutkosky, Francesco Orabona. 17 Feb 2018.
  20. Random gradient extrapolation for distributed and stochastic optimization. Guanghui Lan, Yi Zhou. 15 Nov 2017.
  21. Online to Offline Conversions, Universality and Adaptive Minibatch Sizes. Kfir Y. Levy. 30 May 2017.
  22. Online Learning Without Prior Information. Ashok Cutkosky, K. Boahen. 07 Mar 2017.
  23. Coin Betting and Parameter-Free Online Learning. Francesco Orabona, D. Pál. 12 Feb 2016.
  24. An optimal randomized incremental gradient method. Guanghui Lan, Yi Zhou. 08 Jul 2015.
  25. No-Regret Algorithms for Unconstrained Online Convex Optimization. Matthew J. Streeter, H. B. McMahan. 09 Nov 2012.
  26. Square-Root Lasso: Pivotal Recovery of Sparse Signals via Conic Programming. A. Belloni, Victor Chernozhukov, Lie Wang. 28 Sep 2010.
  27. A parameter-free hedging algorithm. Kamalika Chaudhuri, Y. Freund, Daniel J. Hsu. 16 Mar 2009.