ResearchTrend.AI
Learning with Square Loss: Localization through Offset Rademacher Complexity

Tengyuan Liang, Alexander Rakhlin, Karthik Sridharan
21 February 2015 · arXiv:1502.06134

Papers citing "Learning with Square Loss: Localization through Offset Rademacher Complexity"

8 papers shown
Title | Authors | Date
Sharp Rates in Dependent Learning Theory: Avoiding Sample Size Deflation for the Square Loss | Ingvar M. Ziemann, Stephen Tu, George J. Pappas, Nikolai Matni | 08 Feb 2024
On aggregation for heavy-tailed classes | S. Mendelson | 25 Feb 2015
Learning without Concentration for General Loss Functions | S. Mendelson | 13 Oct 2014
Online Nonparametric Regression | Alexander Rakhlin, Karthik Sridharan | 11 Feb 2014
Learning without Concentration | S. Mendelson | 01 Jan 2014
Empirical entropy, minimax regret and minimax risk | Alexander Rakhlin, Karthik Sridharan, Alexandre B. Tsybakov | 06 Aug 2013
Learning subgaussian classes: Upper and minimax bounds | Guillaume Lecué, S. Mendelson | 21 May 2013
Deviation optimal learning using greedy Q-aggregation | Dong Dai, Philippe Rigollet, Tong Zhang | 12 Mar 2012