Black-Box Reductions for Parameter-free Online Learning in Banach Spaces
Ashok Cutkosky, Francesco Orabona
arXiv:1802.06293, 17 February 2018

Papers citing "Black-Box Reductions for Parameter-free Online Learning in Banach Spaces" (26 papers shown):
  1. Efficiently Solving Discounted MDPs with Predictions on Transition Matrices
     Lixing Lyu, Jiashuo Jiang, Wang Chi Cheung (24 Feb 2025)
  2. Online Detecting LLM-Generated Texts via Sequential Hypothesis Testing by Betting
     Can Chen, Jun-Kun Wang [DeLMO] (29 Oct 2024)
  3. Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement Learning
     Aneesh Muppidi, Zhiyu Zhang, Heng Yang (26 May 2024)
  4. How Free is Parameter-Free Stochastic Optimization?
     Amit Attia, Tomer Koren [ODL] (05 Feb 2024)
  5. A simple uniformly optimal method without line search for convex optimization
     Tianjiao Li, Guanghui Lan (16 Oct 2023)
  6. Efficient Methods for Non-stationary Online Learning
     Peng Zhao, Yan-Feng Xie, Lijun Zhang, Zhi-Hua Zhou (16 Sep 2023)
  7. Auditing Fairness by Betting
     Ben Chugg, Santiago Cortes-Gomez, Bryan Wilder, Aaditya Ramdas [MLAU] (27 May 2023)
  8. SGD with AdaGrad Stepsizes: Full Adaptivity with High Probability to Unknown Parameters, Unbounded Gradients and Affine Variance
     Amit Attia, Tomer Koren [ODL] (17 Feb 2023)
  9. DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule
     Maor Ivgi, Oliver Hinder, Y. Carmon [ODL] (08 Feb 2023)
  10. Differentially Private Online-to-Batch for Smooth Losses
      Qinzi Zhang, Hoang Tran, Ashok Cutkosky [FedML] (12 Oct 2022)
  11. Optimal Dynamic Regret in LQR Control
      Dheeraj Baby, Yu-Xiang Wang (18 Jun 2022)
  12. Nest Your Adaptive Algorithm for Parameter-Agnostic Nonconvex Minimax Optimization
      Junchi Yang, Xiang Li, Niao He [ODL] (01 Jun 2022)
  13. Exploiting the Curvature of Feasible Sets for Faster Projection-Free Online Learning
      Zakaria Mhammedi (23 May 2022)
  14. Making SGD Parameter-Free
      Y. Carmon, Oliver Hinder (04 May 2022)
  15. Parameter-free Mirror Descent
      Andrew Jacobsen, Ashok Cutkosky (26 Feb 2022)
  16. Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits
      Haipeng Luo, Mengxiao Zhang, Peng Zhao, Zhi-Hua Zhou (12 Feb 2022)
  17. Optimal Dynamic Regret in Proper Online Learning with Strongly Convex Losses and Beyond
      Dheeraj Baby, Yu-Xiang Wang (21 Jan 2022)
  18. PDE-Based Optimal Strategy for Unconstrained Online Learning
      Zhiyu Zhang, Ashok Cutkosky, I. Paschalidis (19 Jan 2022)
  19. Nonparametric Two-Sample Testing by Betting
      S. Shekhar, Aaditya Ramdas (16 Dec 2021)
  20. Tight Concentrations and Confidence Sequences from the Regret of Universal Portfolio
      Francesco Orabona, Kwang-Sung Jun (27 Oct 2021)
  21. Minimax Regret for Stochastic Shortest Path with Adversarial Costs and Known Transition
      Liyu Chen, Haipeng Luo, Chen-Yu Wei (07 Dec 2020)
  22. Online Learning with Imperfect Hints
      Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit (11 Feb 2020)
  23. Matrix-Free Preconditioning in Online Learning
      Ashok Cutkosky, Tamás Sarlós [ODL] (29 May 2019)
  24. Anytime Online-to-Batch Conversions, Optimism, and Acceleration
      Ashok Cutkosky (03 Mar 2019)
  25. Online Adaptive Methods, Universality and Acceleration
      Kfir Y. Levy, A. Yurtsever, V. Cevher [ODL] (08 Sep 2018)
  26. On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes
      Xiaoyun Li, Francesco Orabona (21 May 2018)