Stochastic linear optimization never overfits with quadratically-bounded losses on general data
Matus Telgarsky
14 February 2022 · arXiv:2202.06915

Papers citing "Stochastic linear optimization never overfits with quadratically-bounded losses on general data"

4 of 4 citing papers shown:

1. Rates of Convergence in the Central Limit Theorem for Markov Chains, with an Application to TD Learning
   R. Srikant (28 Jan 2024)
2. Unconstrained Online Learning with Unbounded Losses
   Andrew Jacobsen, Ashok Cutkosky (08 Jun 2023)
3. Actor-critic is implicitly biased towards high entropy optimal policies
   Yuzheng Hu, Ziwei Ji, Matus Telgarsky (21 Oct 2021)
4. A High Probability Analysis of Adaptive SGD with Momentum
   Xiaoyun Li, Francesco Orabona (28 Jul 2020)